Mar 17 17:24:55.379082 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:24:55.379105 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:24:55.379113 kernel: KASLR enabled
Mar 17 17:24:55.379119 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 17:24:55.379126 kernel: printk: bootconsole [pl11] enabled
Mar 17 17:24:55.379131 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:24:55.379138 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98
Mar 17 17:24:55.379144 kernel: random: crng init done
Mar 17 17:24:55.379150 kernel: secureboot: Secure boot disabled
Mar 17 17:24:55.379157 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:24:55.379163 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 17:24:55.379168 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379174 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379182 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 17:24:55.379189 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379195 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379201 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379209 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379215 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379221 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379227 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 17:24:55.379233 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379240 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 17:24:55.379246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 17 17:24:55.379252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 17 17:24:55.379258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 17 17:24:55.379264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 17 17:24:55.385501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 17 17:24:55.385525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 17 17:24:55.385531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 17 17:24:55.385538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 17 17:24:55.385544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 17 17:24:55.385550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 17 17:24:55.385557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 17 17:24:55.385563 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 17 17:24:55.385569 kernel: NUMA: NODE_DATA [mem 0x1bf7ed800-0x1bf7f2fff]
Mar 17 17:24:55.385575 kernel: Zone ranges:
Mar 17 17:24:55.385582 kernel:   DMA      [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 17:24:55.385588 kernel:   DMA32    empty
Mar 17 17:24:55.385594 kernel:   Normal   [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:24:55.385605 kernel: Movable zone start for each node
Mar 17 17:24:55.385611 kernel: Early memory node ranges
Mar 17 17:24:55.385618 kernel:   node   0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 17:24:55.385625 kernel:   node   0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 17 17:24:55.385632 kernel:   node   0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 17:24:55.385640 kernel:   node   0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 17:24:55.385646 kernel:   node   0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 17:24:55.385653 kernel:   node   0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 17:24:55.385660 kernel:   node   0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:24:55.385667 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 17:24:55.385674 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 17:24:55.385680 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:24:55.385687 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:24:55.385694 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:24:55.385700 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 17:24:55.385707 kernel: psci: SMC Calling Convention v1.4
Mar 17 17:24:55.385714 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 17 17:24:55.385722 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 17 17:24:55.385728 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:24:55.385735 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:24:55.385742 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:24:55.385749 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:24:55.385755 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:24:55.385762 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:24:55.385769 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:24:55.385775 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:24:55.385782 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:24:55.385789 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:24:55.385797 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 17:24:55.385804 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:24:55.385810 kernel: alternatives: applying boot alternatives
Mar 17 17:24:55.385818 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:55.385826 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:24:55.385832 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:24:55.385839 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:24:55.385845 kernel: Fallback order for Node 0: 0
Mar 17 17:24:55.385852 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 17:24:55.385859 kernel: Policy zone: Normal
Mar 17 17:24:55.385865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:24:55.385873 kernel: software IO TLB: area num 2.
Mar 17 17:24:55.385880 kernel: software IO TLB: mapped [mem 0x0000000036620000-0x000000003a620000] (64MB)
Mar 17 17:24:55.385887 kernel: Memory: 3982368K/4194160K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 211792K reserved, 0K cma-reserved)
Mar 17 17:24:55.385894 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:24:55.385901 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:24:55.385908 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:24:55.385915 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:24:55.385922 kernel: 	Trampoline variant of Tasks RCU enabled.
Mar 17 17:24:55.385929 kernel: 	Tracing variant of Tasks RCU enabled.
Mar 17 17:24:55.385935 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:24:55.385943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:24:55.385951 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:24:55.385957 kernel: GICv3: 960 SPIs implemented
Mar 17 17:24:55.385964 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:24:55.385971 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:24:55.385978 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:24:55.385984 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 17:24:55.385991 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 17:24:55.385998 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:24:55.386005 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:24:55.386012 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:24:55.386019 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:24:55.386026 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:24:55.386034 kernel: Console: colour dummy device 80x25
Mar 17 17:24:55.386041 kernel: printk: console [tty1] enabled
Mar 17 17:24:55.386049 kernel: ACPI: Core revision 20230628
Mar 17 17:24:55.386056 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:24:55.386063 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:24:55.386069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:24:55.386076 kernel: landlock: Up and running.
Mar 17 17:24:55.386083 kernel: SELinux:  Initializing.
Mar 17 17:24:55.386090 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386099 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386106 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:55.386114 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:55.386121 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 17:24:55.386127 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 17:24:55.386135 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 17 17:24:55.386142 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:24:55.386168 kernel: rcu: 	Max phase no-delay instances is 400.
Mar 17 17:24:55.386175 kernel: Remapping and enabling EFI services.
Mar 17 17:24:55.386183 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:24:55.386190 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:24:55.386197 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 17:24:55.386206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:24:55.386213 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:24:55.386220 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:24:55.386228 kernel: SMP: Total of 2 processors activated.
Mar 17 17:24:55.386235 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:24:55.386244 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 17:24:55.386251 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:24:55.386258 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:24:55.386266 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:24:55.386308 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:24:55.386317 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:24:55.386325 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:24:55.386332 kernel: alternatives: applying system-wide alternatives
Mar 17 17:24:55.386339 kernel: devtmpfs: initialized
Mar 17 17:24:55.386349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:24:55.386356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:24:55.386363 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:24:55.386371 kernel: SMBIOS 3.1.0 present.
Mar 17 17:24:55.386378 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 17:24:55.386385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:24:55.386392 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:24:55.386400 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:24:55.386409 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:24:55.386416 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:24:55.386423 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 17 17:24:55.386431 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:24:55.386438 kernel: cpuidle: using governor menu
Mar 17 17:24:55.386445 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:24:55.386453 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:24:55.386460 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:24:55.386467 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:24:55.386476 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:24:55.386483 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:24:55.386491 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:24:55.386498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:24:55.386512 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:24:55.386527 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:24:55.386544 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:24:55.386558 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:24:55.386566 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:24:55.386573 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:24:55.386580 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:24:55.386588 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:24:55.386595 kernel: ACPI: Interpreter enabled
Mar 17 17:24:55.386602 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:24:55.386609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:24:55.386618 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:24:55.386632 kernel: printk: bootconsole [pl11] disabled
Mar 17 17:24:55.386639 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 17:24:55.386646 kernel: iommu: Default domain type: Translated
Mar 17 17:24:55.386654 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:24:55.386661 kernel: efivars: Registered efivars operations
Mar 17 17:24:55.386668 kernel: vgaarb: loaded
Mar 17 17:24:55.386678 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:24:55.386685 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:24:55.386694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:24:55.386702 kernel: pnp: PnP ACPI init
Mar 17 17:24:55.386709 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 17:24:55.386716 kernel: NET: Registered PF_INET protocol family
Mar 17 17:24:55.386723 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:24:55.386730 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:24:55.386738 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:24:55.386745 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:24:55.386754 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:24:55.386762 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:24:55.386770 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386777 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386784 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:24:55.386792 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:24:55.386799 kernel: kvm [1]: HYP mode not available
Mar 17 17:24:55.386806 kernel: Initialise system trusted keyrings
Mar 17 17:24:55.386813 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:24:55.386822 kernel: Key type asymmetric registered
Mar 17 17:24:55.386829 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:24:55.386836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:24:55.386843 kernel: io scheduler mq-deadline registered
Mar 17 17:24:55.386851 kernel: io scheduler kyber registered
Mar 17 17:24:55.386858 kernel: io scheduler bfq registered
Mar 17 17:24:55.386865 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:24:55.386872 kernel: thunder_xcv, ver 1.0
Mar 17 17:24:55.386879 kernel: thunder_bgx, ver 1.0
Mar 17 17:24:55.386886 kernel: nicpf, ver 1.0
Mar 17 17:24:55.386895 kernel: nicvf, ver 1.0
Mar 17 17:24:55.387032 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:24:55.387104 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:24:54 UTC (1742232294)
Mar 17 17:24:55.387114 kernel: efifb: probing for efifb
Mar 17 17:24:55.387121 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 17:24:55.387128 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 17:24:55.387136 kernel: efifb: scrolling: redraw
Mar 17 17:24:55.387145 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:24:55.387152 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:24:55.387160 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:24:55.387167 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 17:24:55.387175 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:24:55.387182 kernel: No ACPI PMU IRQ for CPU0
Mar 17 17:24:55.387189 kernel: No ACPI PMU IRQ for CPU1
Mar 17 17:24:55.387197 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 17:24:55.387204 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:24:55.387213 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:24:55.387220 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:24:55.387227 kernel: Segment Routing with IPv6
Mar 17 17:24:55.387235 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:24:55.387242 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:24:55.387249 kernel: Key type dns_resolver registered
Mar 17 17:24:55.387256 kernel: registered taskstats version 1
Mar 17 17:24:55.387264 kernel: Loading compiled-in X.509 certificates
Mar 17 17:24:55.387285 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:24:55.387292 kernel: Key type .fscrypt registered
Mar 17 17:24:55.387302 kernel: Key type fscrypt-provisioning registered
Mar 17 17:24:55.387310 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:24:55.387317 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:24:55.387324 kernel: ima: No architecture policies found
Mar 17 17:24:55.387332 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:24:55.387339 kernel: clk: Disabling unused clocks
Mar 17 17:24:55.387347 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:24:55.387354 kernel: Run /init as init process
Mar 17 17:24:55.387362 kernel:   with arguments:
Mar 17 17:24:55.387370 kernel:     /init
Mar 17 17:24:55.387377 kernel:   with environment:
Mar 17 17:24:55.387384 kernel:     HOME=/
Mar 17 17:24:55.387391 kernel:     TERM=linux
Mar 17 17:24:55.387398 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:24:55.387408 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:24:55.387417 systemd[1]: Detected virtualization microsoft.
Mar 17 17:24:55.387427 systemd[1]: Detected architecture arm64.
Mar 17 17:24:55.387434 systemd[1]: Running in initrd.
Mar 17 17:24:55.387442 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:24:55.387449 systemd[1]: Hostname set to .
Mar 17 17:24:55.387457 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:24:55.387465 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:24:55.387473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:55.387481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:55.387491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:24:55.387499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:24:55.387507 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:24:55.387515 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:24:55.387525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:24:55.387533 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:24:55.387540 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:55.387550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:55.387558 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:24:55.387566 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:24:55.387573 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:24:55.387581 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:24:55.387589 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:24:55.387597 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:24:55.387604 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:24:55.387612 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:24:55.387622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:55.387630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:55.387638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:55.387645 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:24:55.387653 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:24:55.387661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:24:55.387669 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:24:55.387676 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:24:55.387687 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:24:55.387694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:24:55.387723 systemd-journald[218]: Collecting audit messages is disabled.
Mar 17 17:24:55.387743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:55.387753 systemd-journald[218]: Journal started
Mar 17 17:24:55.387772 systemd-journald[218]: Runtime Journal (/run/log/journal/962d4d8e3a5749febe10978fc48b21dd) is 8.0M, max 78.5M, 70.5M free.
Mar 17 17:24:55.388474 systemd-modules-load[219]: Inserted module 'overlay'
Mar 17 17:24:55.409239 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:24:55.421358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:24:55.440939 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:24:55.440964 kernel: Bridge firewalling registered
Mar 17 17:24:55.433713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:55.439929 systemd-modules-load[219]: Inserted module 'br_netfilter'
Mar 17 17:24:55.449553 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:24:55.464880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:55.481820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:55.504690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:55.512435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:24:55.540045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:24:55.560500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:24:55.571337 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:55.588431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:55.603458 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:24:55.617079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:55.649568 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:24:55.665921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:55.695160 dracut-cmdline[251]: dracut-dracut-053
Mar 17 17:24:55.695160 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:55.682152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:24:55.707776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:55.718025 systemd-resolved[258]: Positive Trust Anchors:
Mar 17 17:24:55.718039 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:24:55.718070 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:24:55.721938 systemd-resolved[258]: Defaulting to hostname 'linux'.
Mar 17 17:24:55.727908 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:55.759962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:55.881310 kernel: SCSI subsystem initialized
Mar 17 17:24:55.889314 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:24:55.899302 kernel: iscsi: registered transport (tcp)
Mar 17 17:24:55.917746 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:24:55.917807 kernel: QLogic iSCSI HBA Driver
Mar 17 17:24:55.959430 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:24:55.974479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:24:56.008717 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:24:56.008773 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:24:56.015306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:24:56.067297 kernel: raid6: neonx8   gen() 15786 MB/s
Mar 17 17:24:56.085285 kernel: raid6: neonx4   gen() 15659 MB/s
Mar 17 17:24:56.105292 kernel: raid6: neonx2   gen() 13214 MB/s
Mar 17 17:24:56.126286 kernel: raid6: neonx1   gen() 10492 MB/s
Mar 17 17:24:56.146309 kernel: raid6: int64x8  gen()  6958 MB/s
Mar 17 17:24:56.166317 kernel: raid6: int64x4  gen()  7353 MB/s
Mar 17 17:24:56.188335 kernel: raid6: int64x2  gen()  6134 MB/s
Mar 17 17:24:56.212611 kernel: raid6: int64x1  gen()  5044 MB/s
Mar 17 17:24:56.212705 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Mar 17 17:24:56.237016 kernel: raid6: .... xor() 11910 MB/s, rmw enabled
Mar 17 17:24:56.237085 kernel: raid6: using neon recovery algorithm
Mar 17 17:24:56.250239 kernel: xor: measuring software checksum speed
Mar 17 17:24:56.250324 kernel:    8regs           : 19778 MB/sec
Mar 17 17:24:56.254115 kernel:    32regs          : 19646 MB/sec
Mar 17 17:24:56.257819 kernel:    arm64_neon      : 26919 MB/sec
Mar 17 17:24:56.262143 kernel: xor: using function: arm64_neon (26919 MB/sec)
Mar 17 17:24:56.315302 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:24:56.327158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:56.346443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:56.370970 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 17 17:24:56.376972 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:56.396745 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:24:56.425507 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 17 17:24:56.458807 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:56.473853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:56.513748 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:56.534754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:24:56.563108 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:56.577982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:56.604617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:56.625441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:56.653626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:24:56.678412 kernel: hv_vmbus: Vmbus version:5.3
Mar 17 17:24:56.678447 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 17 17:24:56.678457 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 17:24:56.679295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:56.722852 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 17 17:24:56.722878 kernel: hv_vmbus: registering driver hid_hyperv
Mar 17 17:24:56.722889 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 17:24:56.722899 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 17 17:24:56.722918 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 17 17:24:56.723095 kernel: hv_vmbus: registering driver hv_storvsc
Mar 17 17:24:56.723106 kernel: PTP clock support registered
Mar 17 17:24:56.723116 kernel: scsi host0: storvsc_host_t
Mar 17 17:24:56.723220 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 17 17:24:56.723243 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 17 17:24:56.692899 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:56.795770 kernel: hv_utils: Registering HyperV Utility Driver
Mar 17 17:24:56.795793 kernel: scsi host1: storvsc_host_t
Mar 17 17:24:56.795936 kernel: hv_vmbus: registering driver hv_utils
Mar 17 17:24:56.795947 kernel: hv_vmbus: registering driver hv_netvsc
Mar 17 17:24:56.795956 kernel: hv_utils: Heartbeat IC version 3.0
Mar 17 17:24:56.795966 kernel: hv_utils: Shutdown IC version 3.2
Mar 17 17:24:56.795983 kernel: hv_utils: TimeSync IC version 4.0
Mar 17 17:24:56.693185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:56.562809 systemd-resolved[258]: Clock change detected. Flushing caches.
Mar 17 17:24:56.599105 systemd-journald[218]: Time jumped backwards, rotating.
Mar 17 17:24:56.599150 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 17 17:24:56.618600 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:24:56.618618 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 17 17:24:56.599639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:56.611803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:56.612048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:56.618741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:56.693594 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 17 17:24:56.722976 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: VF slot 1 added
Mar 17 17:24:56.723134 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 17 17:24:56.723242 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 17:24:56.723329 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 17 17:24:56.723412 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 17 17:24:56.723491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:56.723500 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 17:24:56.642895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:56.746305 kernel: hv_vmbus: registering driver hv_pci
Mar 17 17:24:56.746331 kernel: hv_pci 2c416f2f-642c-4672-83ec-d1dc04f21702: PCI VMBus probing: Using version 0x10004
Mar 17 17:24:56.843204 kernel: hv_pci 2c416f2f-642c-4672-83ec-d1dc04f21702: PCI host bridge to bus 642c:00
Mar 17 17:24:56.843335 kernel: pci_bus 642c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 17 17:24:56.843437 kernel: pci_bus 642c:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 17 17:24:56.843513 kernel: pci 642c:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 17 17:24:56.843694 kernel: pci 642c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:24:56.843790 kernel: pci 642c:00:02.0: enabling Extended Tags
Mar 17 17:24:56.843879 kernel: pci 642c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 642c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 17 17:24:56.843972 kernel: pci_bus 642c:00: busn_res: [bus 00-ff] end is updated to 00
Mar 17 17:24:56.844056 kernel: pci 642c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:24:56.685429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:56.685561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:56.746812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:56.784743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:56.806811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:56.887797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:56.917436 kernel: mlx5_core 642c:00:02.0: enabling device (0000 -> 0002)
Mar 17 17:25:57.127271 kernel: mlx5_core 642c:00:02.0: firmware version: 16.30.1284
Mar 17 17:24:57.127408 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: VF registering: eth1
Mar 17 17:24:57.127502 kernel: mlx5_core 642c:00:02.0 eth1: joined to eth0
Mar 17 17:24:57.127649 kernel: mlx5_core 642c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 17 17:24:57.135579 kernel: mlx5_core 642c:00:02.0 enP25644s1: renamed from eth1
Mar 17 17:24:57.305182 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 17 17:24:57.377759 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (504)
Mar 17 17:24:57.393962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 17 17:24:57.444790 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (495)
Mar 17 17:24:57.450245 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 17 17:24:57.468295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 17 17:24:57.476314 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 17 17:24:57.511868 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:24:57.543573 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:57.554617 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:58.563589 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:58.564643 disk-uuid[606]: The operation has completed successfully.
Mar 17 17:24:58.631977 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:24:58.632077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:24:58.654762 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:24:58.670482 sh[692]: Success
Mar 17 17:24:58.702801 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:24:58.912039 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:24:58.922716 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:24:58.929741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:24:58.970595 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:24:58.970659 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:58.970670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:24:58.983695 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:24:58.988336 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:24:59.343772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:24:59.349849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:24:59.375841 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:24:59.391772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:24:59.425609 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:59.425633 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:59.425642 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:24:59.439630 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:24:59.457995 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:24:59.463993 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:59.472847 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:24:59.487835 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:24:59.546670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:59.568683 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:24:59.596931 systemd-networkd[876]: lo: Link UP
Mar 17 17:24:59.596940 systemd-networkd[876]: lo: Gained carrier
Mar 17 17:24:59.601788 systemd-networkd[876]: Enumeration completed
Mar 17 17:24:59.606166 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:24:59.612974 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:59.612978 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:24:59.613694 systemd[1]: Reached target network.target - Network.
Mar 17 17:24:59.682607 kernel: mlx5_core 642c:00:02.0 enP25644s1: Link up
Mar 17 17:24:59.721591 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: Data path switched to VF: enP25644s1
Mar 17 17:24:59.722144 systemd-networkd[876]: enP25644s1: Link UP
Mar 17 17:24:59.722246 systemd-networkd[876]: eth0: Link UP
Mar 17 17:24:59.722340 systemd-networkd[876]: eth0: Gained carrier
Mar 17 17:24:59.722348 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:59.730998 systemd-networkd[876]: enP25644s1: Gained carrier
Mar 17 17:24:59.758600 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 17:25:00.384033 ignition[809]: Ignition 2.20.0
Mar 17 17:25:00.384046 ignition[809]: Stage: fetch-offline
Mar 17 17:25:00.386235 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:25:00.384087 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.384095 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.384194 ignition[809]: parsed url from cmdline: ""
Mar 17 17:25:00.384199 ignition[809]: no config URL provided
Mar 17 17:25:00.384203 ignition[809]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.419833 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:25:00.384210 ignition[809]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.384215 ignition[809]: failed to fetch config: resource requires networking
Mar 17 17:25:00.384404 ignition[809]: Ignition finished successfully
Mar 17 17:25:00.448360 ignition[884]: Ignition 2.20.0
Mar 17 17:25:00.448367 ignition[884]: Stage: fetch
Mar 17 17:25:00.448652 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.448662 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.448789 ignition[884]: parsed url from cmdline: ""
Mar 17 17:25:00.448792 ignition[884]: no config URL provided
Mar 17 17:25:00.448798 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.448806 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.448834 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 17 17:25:00.572353 ignition[884]: GET result: OK
Mar 17 17:25:00.572430 ignition[884]: config has been read from IMDS userdata
Mar 17 17:25:00.572470 ignition[884]: parsing config with SHA512: 708ab64a050d514500c330f20c511d5c7300b95a7f2149941edf3b555d00883b504cde9b341a5dbeb120547b9165e5bbb27fc111b2812fd52cd0b77e65719cb1
Mar 17 17:25:00.577402 unknown[884]: fetched base config from "system"
Mar 17 17:25:00.577869 ignition[884]: fetch: fetch complete
Mar 17 17:25:00.577412 unknown[884]: fetched base config from "system"
Mar 17 17:25:00.577875 ignition[884]: fetch: fetch passed
Mar 17 17:25:00.577417 unknown[884]: fetched user config from "azure"
Mar 17 17:25:00.577948 ignition[884]: Ignition finished successfully
Mar 17 17:25:00.583072 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:25:00.600843 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:25:00.635160 ignition[891]: Ignition 2.20.0
Mar 17 17:25:00.635173 ignition[891]: Stage: kargs
Mar 17 17:25:00.640424 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:25:00.635360 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.635370 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.636395 ignition[891]: kargs: kargs passed
Mar 17 17:25:00.636450 ignition[891]: Ignition finished successfully
Mar 17 17:25:00.670773 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:25:00.684792 ignition[897]: Ignition 2.20.0
Mar 17 17:25:00.689983 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:25:00.684800 ignition[897]: Stage: disks
Mar 17 17:25:00.696527 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:25:00.685052 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.706698 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:25:00.685074 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.719857 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:25:00.686228 ignition[897]: disks: disks passed
Mar 17 17:25:00.728748 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:25:00.686287 ignition[897]: Ignition finished successfully
Mar 17 17:25:00.740903 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:25:00.774118 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:25:00.795988 systemd-networkd[876]: eth0: Gained IPv6LL
Mar 17 17:25:00.869632 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 17 17:25:00.881780 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:25:00.907649 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:25:00.966598 kernel: EXT4-fs (sda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:25:00.967769 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:25:00.972919 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:25:01.024682 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:25:01.034712 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:25:01.056642 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
Mar 17 17:25:01.066821 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:25:01.101412 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:01.101442 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:01.101454 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:25:01.093710 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:25:01.128733 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:25:01.093752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:25:01.116582 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:25:01.130814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:25:01.154143 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:25:01.616455 coreos-metadata[918]: Mar 17 17:25:01.616 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 17 17:25:01.632007 coreos-metadata[918]: Mar 17 17:25:01.631 INFO Fetch successful
Mar 17 17:25:01.641408 coreos-metadata[918]: Mar 17 17:25:01.634 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 17 17:25:01.655143 coreos-metadata[918]: Mar 17 17:25:01.655 INFO Fetch successful
Mar 17 17:25:01.669740 coreos-metadata[918]: Mar 17 17:25:01.669 INFO wrote hostname ci-4152.2.2-a-e33ca1f69b to /sysroot/etc/hostname
Mar 17 17:25:01.679956 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:25:01.696368 systemd-networkd[876]: enP25644s1: Gained IPv6LL
Mar 17 17:25:01.948719 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:25:01.989787 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:25:02.011648 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:25:02.030588 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:25:03.035701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:25:03.054756 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:25:03.064900 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:25:03.088434 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:03.082692 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:25:03.120363 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:25:03.135301 ignition[1037]: INFO : Ignition 2.20.0
Mar 17 17:25:03.141164 ignition[1037]: INFO : Stage: mount
Mar 17 17:25:03.141164 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:03.141164 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:03.141164 ignition[1037]: INFO : mount: mount passed
Mar 17 17:25:03.141164 ignition[1037]: INFO : Ignition finished successfully
Mar 17 17:25:03.143508 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:25:03.173661 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:25:03.188773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:25:03.221864 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Mar 17 17:25:03.235658 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:03.235725 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:03.240155 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:25:03.246567 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:25:03.248857 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:25:03.275362 ignition[1065]: INFO : Ignition 2.20.0
Mar 17 17:25:03.275362 ignition[1065]: INFO : Stage: files
Mar 17 17:25:03.283802 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:03.283802 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:03.283802 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:25:03.283802 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:25:03.283802 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:25:03.355408 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:25:03.363672 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:25:03.363672 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:25:03.358123 unknown[1065]: wrote ssh authorized keys file for user: core
Mar 17 17:25:03.404327 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:25:03.415747 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 17 17:25:03.454811 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:25:03.591858 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:25:03.603543 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:25:03.603543 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:25:04.023612 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:25:04.215101 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 17 17:25:04.621689 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:25:04.906846 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.906846 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:25:04.943626 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: files passed
Mar 17 17:25:04.954936 ignition[1065]: INFO : Ignition finished successfully
Mar 17 17:25:04.968321 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:25:05.000841 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:25:05.021773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:25:05.082226 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:05.082226 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:05.047306 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:25:05.122335 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:05.047400 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:25:05.082493 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:25:05.099823 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:25:05.138828 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:25:05.182805 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:25:05.182935 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:25:05.197544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:25:05.210601 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:25:05.222307 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:25:05.243041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:25:05.265331 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:25:05.281750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:25:05.301943 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:25:05.302070 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:25:05.316432 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:25:05.330028 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:25:05.343460 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:25:05.355887 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:25:05.355984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:25:05.374062 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:25:05.386871 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:25:05.397670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:25:05.409758 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:25:05.424214 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:25:05.438357 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:25:05.451411 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:25:05.465872 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:25:05.480705 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:25:05.493761 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:25:05.504050 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:25:05.504134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:25:05.522595 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:25:05.541475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:25:05.560065 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:25:05.566901 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:25:05.574670 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:25:05.574758 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:25:05.597522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:25:05.597610 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:25:05.611697 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:25:05.611758 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:25:05.623815 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:25:05.623868 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:25:05.664797 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:25:05.679628 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:25:05.679715 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:25:05.710270 ignition[1117]: INFO : Ignition 2.20.0
Mar 17 17:25:05.710270 ignition[1117]: INFO : Stage: umount
Mar 17 17:25:05.745166 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:05.745166 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:05.745166 ignition[1117]: INFO : umount: umount passed
Mar 17 17:25:05.745166 ignition[1117]: INFO : Ignition finished successfully
Mar 17 17:25:05.712705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:25:05.720280 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:25:05.720361 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:25:05.728568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:25:05.728652 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:25:05.752250 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:25:05.752375 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:25:05.762195 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:25:05.762323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:25:05.771994 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:25:05.772064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:25:05.783780 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:25:05.783846 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:25:05.790259 systemd[1]: Stopped target network.target - Network.
Mar 17 17:25:05.802866 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:25:05.802963 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:25:05.816972 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:25:05.822126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:25:05.833678 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:25:05.842853 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:25:05.853694 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:25:05.867022 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:25:05.867096 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:25:05.878242 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:25:05.878296 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:25:05.885186 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:25:05.885262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:25:05.896510 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:25:05.896603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:25:05.903948 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:25:05.914828 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:25:05.926588 systemd-networkd[876]: eth0: DHCPv6 lease lost
Mar 17 17:25:06.176847 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: Data path switched from VF: enP25644s1
Mar 17 17:25:05.928282 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:25:05.929204 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:25:05.929287 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:25:05.937825 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:25:05.937946 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:25:05.944933 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:25:05.945028 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:25:05.960000 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:25:05.960079 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:25:05.972680 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:25:05.972766 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:25:06.003807 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:25:06.013472 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:25:06.013591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:25:06.027620 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:25:06.027679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:25:06.043411 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:25:06.043467 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:25:06.055277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:25:06.055330 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:25:06.071935 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:25:06.123604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:25:06.123810 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:25:06.137698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:25:06.137752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:25:06.150340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:25:06.150382 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:25:06.172511 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:25:06.172610 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:25:06.189032 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:25:06.189103 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:25:06.205795 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:25:06.205865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:25:06.245790 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:25:06.262474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:25:06.262581 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:25:06.276434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:25:06.491215 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:25:06.276491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:25:06.290165 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:25:06.290292 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:25:06.302095 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:25:06.302206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:25:06.315084 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:25:06.346169 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:25:06.366840 systemd[1]: Switching root.
Mar 17 17:25:06.536460 systemd-journald[218]: Journal stopped
Mar 17 17:24:55.379082 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 17:24:55.379105 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:24:55.379113 kernel: KASLR enabled
Mar 17 17:24:55.379119 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Mar 17 17:24:55.379126 kernel: printk: bootconsole [pl11] enabled
Mar 17 17:24:55.379131 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:24:55.379138 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e423d98
Mar 17 17:24:55.379144 kernel: random: crng init done
Mar 17 17:24:55.379150 kernel: secureboot: Secure boot disabled
Mar 17 17:24:55.379157 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:24:55.379163 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Mar 17 17:24:55.379168 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379174 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379182 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Mar 17 17:24:55.379189 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379195 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379201 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379209 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379215 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379221 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379227 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Mar 17 17:24:55.379233 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Mar 17 17:24:55.379240 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Mar 17 17:24:55.379246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Mar 17 17:24:55.379252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Mar 17 17:24:55.379258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Mar 17 17:24:55.379264 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Mar 17 17:24:55.385501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Mar 17 17:24:55.385525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Mar 17 17:24:55.385531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Mar 17 17:24:55.385538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Mar 17 17:24:55.385544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Mar 17 17:24:55.385550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Mar 17 17:24:55.385557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Mar 17 17:24:55.385563 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Mar 17 17:24:55.385569 kernel: NUMA: NODE_DATA [mem 0x1bf7ed800-0x1bf7f2fff]
Mar 17 17:24:55.385575 kernel: Zone ranges:
Mar 17 17:24:55.385582 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Mar 17 17:24:55.385588 kernel: DMA32 empty
Mar 17 17:24:55.385594 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:24:55.385605 kernel: Movable zone start for each node
Mar 17 17:24:55.385611 kernel: Early memory node ranges
Mar 17 17:24:55.385618 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Mar 17 17:24:55.385625 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Mar 17 17:24:55.385632 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Mar 17 17:24:55.385640 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Mar 17 17:24:55.385646 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Mar 17 17:24:55.385653 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Mar 17 17:24:55.385660 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Mar 17 17:24:55.385667 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Mar 17 17:24:55.385674 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Mar 17 17:24:55.385680 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:24:55.385687 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 17:24:55.385694 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:24:55.385700 kernel: psci: MIGRATE_INFO_TYPE not supported.
Mar 17 17:24:55.385707 kernel: psci: SMC Calling Convention v1.4
Mar 17 17:24:55.385714 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Mar 17 17:24:55.385722 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Mar 17 17:24:55.385728 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:24:55.385735 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:24:55.385742 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:24:55.385749 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:24:55.385755 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:24:55.385762 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 17:24:55.385769 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:24:55.385775 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 17:24:55.385782 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 17:24:55.385789 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 17:24:55.385797 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Mar 17 17:24:55.385804 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 17:24:55.385810 kernel: alternatives: applying boot alternatives
Mar 17 17:24:55.385818 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:55.385826 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:24:55.385832 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:24:55.385839 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:24:55.385845 kernel: Fallback order for Node 0: 0
Mar 17 17:24:55.385852 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Mar 17 17:24:55.385859 kernel: Policy zone: Normal
Mar 17 17:24:55.385865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:24:55.385873 kernel: software IO TLB: area num 2.
Mar 17 17:24:55.385880 kernel: software IO TLB: mapped [mem 0x0000000036620000-0x000000003a620000] (64MB)
Mar 17 17:24:55.385887 kernel: Memory: 3982368K/4194160K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 211792K reserved, 0K cma-reserved)
Mar 17 17:24:55.385894 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:24:55.385901 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:24:55.385908 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:24:55.385915 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:24:55.385922 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:24:55.385929 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:24:55.385935 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:24:55.385943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:24:55.385951 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:24:55.385957 kernel: GICv3: 960 SPIs implemented
Mar 17 17:24:55.385964 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:24:55.385971 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:24:55.385978 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 17 17:24:55.385984 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Mar 17 17:24:55.385991 kernel: ITS: No ITS available, not enabling LPIs
Mar 17 17:24:55.385998 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:24:55.386005 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:24:55.386012 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 17:24:55.386019 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 17:24:55.386026 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 17:24:55.386034 kernel: Console: colour dummy device 80x25
Mar 17 17:24:55.386041 kernel: printk: console [tty1] enabled
Mar 17 17:24:55.386049 kernel: ACPI: Core revision 20230628
Mar 17 17:24:55.386056 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 17:24:55.386063 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:24:55.386069 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:24:55.386076 kernel: landlock: Up and running.
Mar 17 17:24:55.386083 kernel: SELinux: Initializing.
Mar 17 17:24:55.386090 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386099 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386106 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:55.386114 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:55.386121 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Mar 17 17:24:55.386127 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Mar 17 17:24:55.386135 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Mar 17 17:24:55.386142 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:24:55.386168 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:24:55.386175 kernel: Remapping and enabling EFI services.
Mar 17 17:24:55.386183 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:24:55.386190 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:24:55.386197 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Mar 17 17:24:55.386206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 17:24:55.386213 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 17:24:55.386220 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:24:55.386228 kernel: SMP: Total of 2 processors activated.
Mar 17 17:24:55.386235 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:24:55.386244 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Mar 17 17:24:55.386251 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 17:24:55.386258 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:24:55.386266 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 17:24:55.386308 kernel: CPU features: detected: LSE atomic instructions
Mar 17 17:24:55.386317 kernel: CPU features: detected: Privileged Access Never
Mar 17 17:24:55.386325 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:24:55.386332 kernel: alternatives: applying system-wide alternatives
Mar 17 17:24:55.386339 kernel: devtmpfs: initialized
Mar 17 17:24:55.386349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:24:55.386356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:24:55.386363 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:24:55.386371 kernel: SMBIOS 3.1.0 present.
Mar 17 17:24:55.386378 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Mar 17 17:24:55.386385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:24:55.386392 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:24:55.386400 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:24:55.386409 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:24:55.386416 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:24:55.386423 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Mar 17 17:24:55.386431 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:24:55.386438 kernel: cpuidle: using governor menu
Mar 17 17:24:55.386445 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:24:55.386453 kernel: ASID allocator initialised with 32768 entries
Mar 17 17:24:55.386460 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:24:55.386467 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:24:55.386476 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 17 17:24:55.386483 kernel: Modules: 0 pages in range for non-PLT usage
Mar 17 17:24:55.386491 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:24:55.386498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386505 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:24:55.386512 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386520 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:24:55.386527 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:24:55.386544 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:24:55.386551 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:24:55.386558 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:24:55.386566 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:24:55.386573 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:24:55.386580 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:24:55.386588 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:24:55.386595 kernel: ACPI: Interpreter enabled
Mar 17 17:24:55.386602 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:24:55.386609 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 17:24:55.386618 kernel: printk: console [ttyAMA0] enabled
Mar 17 17:24:55.386632 kernel: printk: bootconsole [pl11] disabled
Mar 17 17:24:55.386639 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Mar 17 17:24:55.386646 kernel: iommu: Default domain type: Translated
Mar 17 17:24:55.386654 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:24:55.386661 kernel: efivars: Registered efivars operations
Mar 17 17:24:55.386668 kernel: vgaarb: loaded
Mar 17 17:24:55.386678 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:24:55.386685 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:24:55.386694 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:24:55.386702 kernel: pnp: PnP ACPI init
Mar 17 17:24:55.386709 kernel: pnp: PnP ACPI: found 0 devices
Mar 17 17:24:55.386716 kernel: NET: Registered PF_INET protocol family
Mar 17 17:24:55.386723 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:24:55.386730 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:24:55.386738 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:24:55.386745 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:24:55.386754 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:24:55.386762 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:24:55.386770 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386777 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:55.386784 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:24:55.386792 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:24:55.386799 kernel: kvm [1]: HYP mode not available
Mar 17 17:24:55.386806 kernel: Initialise system trusted keyrings
Mar 17 17:24:55.386813 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:24:55.386822 kernel: Key type asymmetric registered
Mar 17 17:24:55.386829 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:24:55.386836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:24:55.386843 kernel: io scheduler mq-deadline registered
Mar 17 17:24:55.386851 kernel: io scheduler kyber registered
Mar 17 17:24:55.386858 kernel: io scheduler bfq registered
Mar 17 17:24:55.386865 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:24:55.386872 kernel: thunder_xcv, ver 1.0
Mar 17 17:24:55.386879 kernel: thunder_bgx, ver 1.0
Mar 17 17:24:55.386886 kernel: nicpf, ver 1.0
Mar 17 17:24:55.386895 kernel: nicvf, ver 1.0
Mar 17 17:24:55.387032 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:24:55.387104 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:24:54 UTC (1742232294)
Mar 17 17:24:55.387114 kernel: efifb: probing for efifb
Mar 17 17:24:55.387121 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Mar 17 17:24:55.387128 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Mar 17 17:24:55.387136 kernel: efifb: scrolling: redraw
Mar 17 17:24:55.387145 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:24:55.387152 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:24:55.387160 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:24:55.387167 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Mar 17 17:24:55.387175 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:24:55.387182 kernel: No ACPI PMU IRQ for CPU0
Mar 17 17:24:55.387189 kernel: No ACPI PMU IRQ for CPU1
Mar 17 17:24:55.387197 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Mar 17 17:24:55.387204 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:24:55.387213 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:24:55.387220 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:24:55.387227 kernel: Segment Routing with IPv6
Mar 17 17:24:55.387235 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:24:55.387242 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:24:55.387249 kernel: Key type dns_resolver registered
Mar 17 17:24:55.387256 kernel: registered taskstats version 1
Mar 17 17:24:55.387264 kernel: Loading compiled-in X.509 certificates
Mar 17 17:24:55.387285 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:24:55.387292 kernel: Key type .fscrypt registered
Mar 17 17:24:55.387302 kernel: Key type fscrypt-provisioning registered
Mar 17 17:24:55.387310 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:24:55.387317 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:24:55.387324 kernel: ima: No architecture policies found
Mar 17 17:24:55.387332 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:24:55.387339 kernel: clk: Disabling unused clocks
Mar 17 17:24:55.387347 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:24:55.387354 kernel: Run /init as init process
Mar 17 17:24:55.387362 kernel: with arguments:
Mar 17 17:24:55.387370 kernel: /init
Mar 17 17:24:55.387377 kernel: with environment:
Mar 17 17:24:55.387384 kernel: HOME=/
Mar 17 17:24:55.387391 kernel: TERM=linux
Mar 17 17:24:55.387398 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:24:55.387408 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:24:55.387417 systemd[1]: Detected virtualization microsoft.
Mar 17 17:24:55.387427 systemd[1]: Detected architecture arm64.
Mar 17 17:24:55.387434 systemd[1]: Running in initrd.
Mar 17 17:24:55.387442 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:24:55.387449 systemd[1]: Hostname set to .
Mar 17 17:24:55.387457 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:24:55.387465 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:24:55.387473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:55.387481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:55.387491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:24:55.387499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:24:55.387507 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:24:55.387515 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:24:55.387525 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:24:55.387533 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:24:55.387540 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:55.387550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:55.387558 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:24:55.387566 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:24:55.387573 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:24:55.387581 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:24:55.387589 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:24:55.387597 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:24:55.387604 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:24:55.387612 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:24:55.387622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:55.387630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:55.387638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:55.387645 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:24:55.387653 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:24:55.387661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:24:55.387669 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:24:55.387676 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:24:55.387687 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:24:55.387694 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:24:55.387723 systemd-journald[218]: Collecting audit messages is disabled.
Mar 17 17:24:55.387743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:55.387753 systemd-journald[218]: Journal started
Mar 17 17:24:55.387772 systemd-journald[218]: Runtime Journal (/run/log/journal/962d4d8e3a5749febe10978fc48b21dd) is 8.0M, max 78.5M, 70.5M free.
Mar 17 17:24:55.388474 systemd-modules-load[219]: Inserted module 'overlay'
Mar 17 17:24:55.409239 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:24:55.421358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:24:55.440939 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:24:55.440964 kernel: Bridge firewalling registered
Mar 17 17:24:55.433713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:55.439929 systemd-modules-load[219]: Inserted module 'br_netfilter'
Mar 17 17:24:55.449553 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:24:55.464880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:55.481820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:55.504690 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:55.512435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:24:55.540045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:24:55.560500 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:24:55.571337 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:55.588431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:55.603458 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:24:55.617079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:55.649568 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:24:55.665921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:55.695160 dracut-cmdline[251]: dracut-dracut-053
Mar 17 17:24:55.695160 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:55.682152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:24:55.707776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:55.718025 systemd-resolved[258]: Positive Trust Anchors:
Mar 17 17:24:55.718039 systemd-resolved[258]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:24:55.718070 systemd-resolved[258]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:24:55.721938 systemd-resolved[258]: Defaulting to hostname 'linux'.
Mar 17 17:24:55.727908 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:55.759962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:55.881310 kernel: SCSI subsystem initialized
Mar 17 17:24:55.889314 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:24:55.899302 kernel: iscsi: registered transport (tcp)
Mar 17 17:24:55.917746 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:24:55.917807 kernel: QLogic iSCSI HBA Driver
Mar 17 17:24:55.959430 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:24:55.974479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:24:56.008717 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:24:56.008773 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:24:56.015306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:24:56.067297 kernel: raid6: neonx8 gen() 15786 MB/s
Mar 17 17:24:56.085285 kernel: raid6: neonx4 gen() 15659 MB/s
Mar 17 17:24:56.105292 kernel: raid6: neonx2 gen() 13214 MB/s
Mar 17 17:24:56.126286 kernel: raid6: neonx1 gen() 10492 MB/s
Mar 17 17:24:56.146309 kernel: raid6: int64x8 gen() 6958 MB/s
Mar 17 17:24:56.166317 kernel: raid6: int64x4 gen() 7353 MB/s
Mar 17 17:24:56.188335 kernel: raid6: int64x2 gen() 6134 MB/s
Mar 17 17:24:56.212611 kernel: raid6: int64x1 gen() 5044 MB/s
Mar 17 17:24:56.212705 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Mar 17 17:24:56.237016 kernel: raid6: .... xor() 11910 MB/s, rmw enabled
Mar 17 17:24:56.237085 kernel: raid6: using neon recovery algorithm
Mar 17 17:24:56.250239 kernel: xor: measuring software checksum speed
Mar 17 17:24:56.250324 kernel: 8regs : 19778 MB/sec
Mar 17 17:24:56.254115 kernel: 32regs : 19646 MB/sec
Mar 17 17:24:56.257819 kernel: arm64_neon : 26919 MB/sec
Mar 17 17:24:56.262143 kernel: xor: using function: arm64_neon (26919 MB/sec)
Mar 17 17:24:56.315302 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:24:56.327158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:56.346443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:56.370970 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 17 17:24:56.376972 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:56.396745 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:24:56.425507 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation
Mar 17 17:24:56.458807 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:56.473853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:56.513748 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:56.534754 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:24:56.563108 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:56.577982 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:56.604617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:56.625441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:56.653626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:24:56.678412 kernel: hv_vmbus: Vmbus version:5.3
Mar 17 17:24:56.678447 kernel: hv_vmbus: registering driver hyperv_keyboard
Mar 17 17:24:56.678457 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 17:24:56.679295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:56.722852 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Mar 17 17:24:56.722878 kernel: hv_vmbus: registering driver hid_hyperv
Mar 17 17:24:56.722889 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 17:24:56.722899 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Mar 17 17:24:56.722918 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Mar 17 17:24:56.723095 kernel: hv_vmbus: registering driver hv_storvsc
Mar 17 17:24:56.723106 kernel: PTP clock support registered
Mar 17 17:24:56.723116 kernel: scsi host0: storvsc_host_t
Mar 17 17:24:56.723220 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Mar 17 17:24:56.723243 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Mar 17 17:24:56.692899 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:56.795770 kernel: hv_utils: Registering HyperV Utility Driver
Mar 17 17:24:56.795793 kernel: scsi host1: storvsc_host_t
Mar 17 17:24:56.795936 kernel: hv_vmbus: registering driver hv_utils
Mar 17 17:24:56.795947 kernel: hv_vmbus: registering driver hv_netvsc
Mar 17 17:24:56.795956 kernel: hv_utils: Heartbeat IC version 3.0
Mar 17 17:24:56.795966 kernel: hv_utils: Shutdown IC version 3.2
Mar 17 17:24:56.795983 kernel: hv_utils: TimeSync IC version 4.0
Mar 17 17:24:56.693185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:56.562809 systemd-resolved[258]: Clock change detected. Flushing caches.
Mar 17 17:24:56.599105 systemd-journald[218]: Time jumped backwards, rotating.
Mar 17 17:24:56.599150 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Mar 17 17:24:56.618600 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:24:56.618618 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Mar 17 17:24:56.599639 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:56.611803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:56.612048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:56.618741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:56.693594 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Mar 17 17:24:56.722976 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: VF slot 1 added
Mar 17 17:24:56.723134 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Mar 17 17:24:56.723242 kernel: sd 0:0:0:0: [sda] Write Protect is off
Mar 17 17:24:56.723329 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Mar 17 17:24:56.723412 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Mar 17 17:24:56.723491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:56.723500 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Mar 17 17:24:56.642895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:56.746305 kernel: hv_vmbus: registering driver hv_pci
Mar 17 17:24:56.746331 kernel: hv_pci 2c416f2f-642c-4672-83ec-d1dc04f21702: PCI VMBus probing: Using version 0x10004
Mar 17 17:24:56.843204 kernel: hv_pci 2c416f2f-642c-4672-83ec-d1dc04f21702: PCI host bridge to bus 642c:00
Mar 17 17:24:56.843335 kernel: pci_bus 642c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Mar 17 17:24:56.843437 kernel: pci_bus 642c:00: No busn resource found for root bus, will use [bus 00-ff]
Mar 17 17:24:56.843513 kernel: pci 642c:00:02.0: [15b3:1018] type 00 class 0x020000
Mar 17 17:24:56.843694 kernel: pci 642c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:24:56.843790 kernel: pci 642c:00:02.0: enabling Extended Tags
Mar 17 17:24:56.843879 kernel: pci 642c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 642c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Mar 17 17:24:56.843972 kernel: pci_bus 642c:00: busn_res: [bus 00-ff] end is updated to 00
Mar 17 17:24:56.844056 kernel: pci 642c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Mar 17 17:24:56.685429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:56.685561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:56.746812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:56.784743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:56.806811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:56.887797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:56.917436 kernel: mlx5_core 642c:00:02.0: enabling device (0000 -> 0002)
Mar 17 17:24:57.127271 kernel: mlx5_core 642c:00:02.0: firmware version: 16.30.1284
Mar 17 17:24:57.127408 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: VF registering: eth1
Mar 17 17:24:57.127502 kernel: mlx5_core 642c:00:02.0 eth1: joined to eth0
Mar 17 17:24:57.127649 kernel: mlx5_core 642c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Mar 17 17:24:57.135579 kernel: mlx5_core 642c:00:02.0 enP25644s1: renamed from eth1
Mar 17 17:24:57.305182 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Mar 17 17:24:57.377759 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (504)
Mar 17 17:24:57.393962 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 17 17:24:57.444790 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (495)
Mar 17 17:24:57.450245 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Mar 17 17:24:57.468295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Mar 17 17:24:57.476314 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Mar 17 17:24:57.511868 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:24:57.543573 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:57.554617 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:58.563589 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 17:24:58.564643 disk-uuid[606]: The operation has completed successfully.
Mar 17 17:24:58.631977 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:24:58.632077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:24:58.654762 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:24:58.670482 sh[692]: Success
Mar 17 17:24:58.702801 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:24:58.912039 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:24:58.922716 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:24:58.929741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:24:58.970595 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:24:58.970659 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:58.970670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:24:58.983695 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:24:58.988336 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:24:59.343772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:24:59.349849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:24:59.375841 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:24:59.391772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:24:59.425609 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:59.425633 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:59.425642 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:24:59.439630 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:24:59.457995 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:24:59.463993 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:59.472847 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:24:59.487835 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:24:59.546670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:59.568683 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:24:59.596931 systemd-networkd[876]: lo: Link UP
Mar 17 17:24:59.596940 systemd-networkd[876]: lo: Gained carrier
Mar 17 17:24:59.601788 systemd-networkd[876]: Enumeration completed
Mar 17 17:24:59.606166 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:24:59.612974 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:59.612978 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:24:59.613694 systemd[1]: Reached target network.target - Network.
Mar 17 17:24:59.682607 kernel: mlx5_core 642c:00:02.0 enP25644s1: Link up
Mar 17 17:24:59.721591 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: Data path switched to VF: enP25644s1
Mar 17 17:24:59.722144 systemd-networkd[876]: enP25644s1: Link UP
Mar 17 17:24:59.722246 systemd-networkd[876]: eth0: Link UP
Mar 17 17:24:59.722340 systemd-networkd[876]: eth0: Gained carrier
Mar 17 17:24:59.722348 systemd-networkd[876]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:59.730998 systemd-networkd[876]: enP25644s1: Gained carrier
Mar 17 17:24:59.758600 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Mar 17 17:25:00.384033 ignition[809]: Ignition 2.20.0
Mar 17 17:25:00.384046 ignition[809]: Stage: fetch-offline
Mar 17 17:25:00.386235 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:25:00.384087 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.384095 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.384194 ignition[809]: parsed url from cmdline: ""
Mar 17 17:25:00.384199 ignition[809]: no config URL provided
Mar 17 17:25:00.384203 ignition[809]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.419833 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
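The 'zz-default.network' unit that matched eth0 in the log is Flatcar's lowest-priority catch-all; its actual contents are not shown here, but a minimal sketch of such a match-all DHCP unit (contents illustrative, not taken from this boot) looks like:

```ini
# Sketch of a catch-all systemd-networkd unit; the real
# /usr/lib/systemd/network/zz-default.network shipped by Flatcar
# is not reproduced in this log.
[Match]
# Match any interface not claimed by an earlier-sorted .network file.
Name=*

[Network]
# Acquire addresses via DHCP, as eth0 did above (10.200.20.35/24 from 168.63.129.16).
DHCP=yes
```

Because .network files are applied in lexical order, the `zz-` prefix ensures this unit only catches interfaces no more specific file has matched.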
Mar 17 17:25:00.384210 ignition[809]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.384215 ignition[809]: failed to fetch config: resource requires networking
Mar 17 17:25:00.384404 ignition[809]: Ignition finished successfully
Mar 17 17:25:00.448360 ignition[884]: Ignition 2.20.0
Mar 17 17:25:00.448367 ignition[884]: Stage: fetch
Mar 17 17:25:00.448652 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.448662 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.448789 ignition[884]: parsed url from cmdline: ""
Mar 17 17:25:00.448792 ignition[884]: no config URL provided
Mar 17 17:25:00.448798 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.448806 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:25:00.448834 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Mar 17 17:25:00.572353 ignition[884]: GET result: OK
Mar 17 17:25:00.572430 ignition[884]: config has been read from IMDS userdata
Mar 17 17:25:00.572470 ignition[884]: parsing config with SHA512: 708ab64a050d514500c330f20c511d5c7300b95a7f2149941edf3b555d00883b504cde9b341a5dbeb120547b9165e5bbb27fc111b2812fd52cd0b77e65719cb1
Mar 17 17:25:00.577402 unknown[884]: fetched base config from "system"
Mar 17 17:25:00.577869 ignition[884]: fetch: fetch complete
Mar 17 17:25:00.577412 unknown[884]: fetched base config from "system"
Mar 17 17:25:00.577875 ignition[884]: fetch: fetch passed
Mar 17 17:25:00.577417 unknown[884]: fetched user config from "azure"
Mar 17 17:25:00.577948 ignition[884]: Ignition finished successfully
Mar 17 17:25:00.583072 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:25:00.600843 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
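The 'parsing config with SHA512: 708ab6...' line is the hex SHA-512 digest Ignition logs for the fetched config bytes. A minimal sketch of computing such a digest (the sample payload below is illustrative, not the actual IMDS userdata from this boot):

```python
import hashlib

def config_digest(config: bytes) -> str:
    """Return the hex SHA-512 digest of a config payload, as Ignition logs it."""
    return hashlib.sha512(config).hexdigest()

# Illustrative payload only; the real userdata is not shown in the log.
sample = b'{"ignition": {"version": "3.3.0"}}'
digest = config_digest(sample)
```

The logged digest is 128 hex characters (512 bits), which lets the fetched config be matched against a known-good payload without printing it.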
Mar 17 17:25:00.635160 ignition[891]: Ignition 2.20.0
Mar 17 17:25:00.635173 ignition[891]: Stage: kargs
Mar 17 17:25:00.640424 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:25:00.635360 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.635370 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.636395 ignition[891]: kargs: kargs passed
Mar 17 17:25:00.636450 ignition[891]: Ignition finished successfully
Mar 17 17:25:00.670773 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:25:00.684792 ignition[897]: Ignition 2.20.0
Mar 17 17:25:00.689983 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:25:00.684800 ignition[897]: Stage: disks
Mar 17 17:25:00.696527 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:25:00.685052 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:00.706698 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:25:00.685074 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:00.719857 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:25:00.686228 ignition[897]: disks: disks passed
Mar 17 17:25:00.728748 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:25:00.686287 ignition[897]: Ignition finished successfully
Mar 17 17:25:00.740903 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:25:00.774118 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:25:00.795988 systemd-networkd[876]: eth0: Gained IPv6LL
Mar 17 17:25:00.869632 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Mar 17 17:25:00.881780 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:25:00.907649 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:25:00.966598 kernel: EXT4-fs (sda9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:25:00.967769 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:25:00.972919 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:25:01.024682 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:25:01.034712 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:25:01.056642 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
Mar 17 17:25:01.066821 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 17 17:25:01.101412 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:01.101442 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:01.101454 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:25:01.093710 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:25:01.128733 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:25:01.093752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:25:01.116582 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:25:01.130814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:25:01.154143 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:25:01.616455 coreos-metadata[918]: Mar 17 17:25:01.616 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Mar 17 17:25:01.632007 coreos-metadata[918]: Mar 17 17:25:01.631 INFO Fetch successful
Mar 17 17:25:01.641408 coreos-metadata[918]: Mar 17 17:25:01.634 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Mar 17 17:25:01.655143 coreos-metadata[918]: Mar 17 17:25:01.655 INFO Fetch successful
Mar 17 17:25:01.669740 coreos-metadata[918]: Mar 17 17:25:01.669 INFO wrote hostname ci-4152.2.2-a-e33ca1f69b to /sysroot/etc/hostname
Mar 17 17:25:01.679956 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:25:01.696368 systemd-networkd[876]: enP25644s1: Gained IPv6LL
Mar 17 17:25:01.948719 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:25:01.989787 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:25:02.011648 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:25:02.030588 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:25:03.035701 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:25:03.054756 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:25:03.064900 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:25:03.088434 kernel: BTRFS info (device sda6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:03.082692 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:25:03.120363 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:25:03.135301 ignition[1037]: INFO : Ignition 2.20.0
Mar 17 17:25:03.141164 ignition[1037]: INFO : Stage: mount
Mar 17 17:25:03.141164 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:03.141164 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:03.141164 ignition[1037]: INFO : mount: mount passed
Mar 17 17:25:03.141164 ignition[1037]: INFO : Ignition finished successfully
Mar 17 17:25:03.143508 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:25:03.173661 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:25:03.188773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:25:03.221864 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Mar 17 17:25:03.235658 kernel: BTRFS info (device sda6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:25:03.235725 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:25:03.240155 kernel: BTRFS info (device sda6): using free space tree
Mar 17 17:25:03.246567 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 17 17:25:03.248857 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
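The flatcar-metadata-hostname agent above fetches the instance name from the Azure IMDS URL shown in the log and writes it to /sysroot/etc/hostname. A hedged Python sketch of those two steps (the helper names are invented for illustration; Azure IMDS requires the `Metadata: true` header):

```python
import urllib.request

# The exact URL coreos-metadata logged during this boot.
IMDS_NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

def imds_request(url: str = IMDS_NAME_URL) -> urllib.request.Request:
    """Build the IMDS request; Azure rejects requests without 'Metadata: true'."""
    return urllib.request.Request(url, headers={"Metadata": "true"})

def hostname_file_content(name: str) -> str:
    """/etc/hostname holds the bare hostname followed by a newline."""
    return name.strip() + "\n"

# Only runnable on an Azure VM, where the link-local IMDS endpoint exists:
# name = urllib.request.urlopen(imds_request()).read().decode()
# with open("/sysroot/etc/hostname", "w") as f:
#     f.write(hostname_file_content(name))
```

The real agent is coreos-metadata (Afterburn lineage), not this script; the sketch only mirrors the request and file format visible in the log.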
Mar 17 17:25:03.275362 ignition[1065]: INFO : Ignition 2.20.0
Mar 17 17:25:03.275362 ignition[1065]: INFO : Stage: files
Mar 17 17:25:03.283802 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:03.283802 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:03.283802 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:25:03.283802 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:25:03.283802 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:25:03.355408 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:25:03.363672 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:25:03.363672 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:25:03.358123 unknown[1065]: wrote ssh authorized keys file for user: core
Mar 17 17:25:03.404327 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:25:03.415747 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 17 17:25:03.454811 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:25:03.591858 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:25:03.603543 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:25:03.603543 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:25:04.023612 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:25:04.215101 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.226760 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 17 17:25:04.621689 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:25:04.906846 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:25:04.906846 ignition[1065]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:25:04.943626 ignition[1065]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:25:04.954936 ignition[1065]: INFO : files: files passed
Mar 17 17:25:04.954936 ignition[1065]: INFO : Ignition finished successfully
Mar 17 17:25:04.968321 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:25:05.000841 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:25:05.021773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:25:05.082226 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:05.082226 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:05.047306 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:25:05.122335 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:25:05.047400 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:25:05.082493 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:25:05.099823 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:25:05.138828 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:25:05.182805 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:25:05.182935 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:25:05.197544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:25:05.210601 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:25:05.222307 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:25:05.243041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:25:05.265331 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:25:05.281750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:25:05.301943 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:25:05.302070 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:25:05.316432 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:25:05.330028 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:25:05.343460 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:25:05.355887 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:25:05.355984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:25:05.374062 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:25:05.386871 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:25:05.397670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:25:05.409758 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:25:05.424214 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:25:05.438357 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:25:05.451411 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:25:05.465872 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:25:05.480705 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:25:05.493761 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:25:05.504050 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:25:05.504134 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:25:05.522595 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:25:05.541475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:25:05.560065 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:25:05.566901 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:25:05.574670 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:25:05.574758 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:25:05.597522 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:25:05.597610 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:25:05.611697 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:25:05.611758 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:25:05.623815 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 17 17:25:05.623868 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 17 17:25:05.664797 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:25:05.679628 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:25:05.679715 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:25:05.710270 ignition[1117]: INFO : Ignition 2.20.0
Mar 17 17:25:05.710270 ignition[1117]: INFO : Stage: umount
Mar 17 17:25:05.745166 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:25:05.745166 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Mar 17 17:25:05.745166 ignition[1117]: INFO : umount: umount passed
Mar 17 17:25:05.745166 ignition[1117]: INFO : Ignition finished successfully
Mar 17 17:25:05.712705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:25:05.720280 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:25:05.720361 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:25:05.728568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:25:05.728652 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:25:05.752250 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:25:05.752375 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:25:05.762195 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:25:05.762323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:25:05.771994 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:25:05.772064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:25:05.783780 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:25:05.783846 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:25:05.790259 systemd[1]: Stopped target network.target - Network.
Mar 17 17:25:05.802866 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:25:05.802963 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:25:05.816972 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:25:05.822126 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:25:05.833678 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:25:05.842853 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:25:05.853694 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:25:05.867022 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:25:05.867096 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:25:05.878242 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:25:05.878296 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:25:05.885186 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:25:05.885262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:25:05.896510 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:25:05.896603 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:25:05.903948 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:25:05.914828 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:25:05.926588 systemd-networkd[876]: eth0: DHCPv6 lease lost
Mar 17 17:25:06.176847 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: Data path switched from VF: enP25644s1
Mar 17 17:25:05.928282 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:25:05.929204 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:25:05.929287 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:25:05.937825 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:25:05.937946 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:25:05.944933 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:25:05.945028 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:25:05.960000 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:25:05.960079 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:25:05.972680 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:25:05.972766 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:25:06.003807 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:25:06.013472 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:25:06.013591 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:25:06.027620 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:25:06.027679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:25:06.043411 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:25:06.043467 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:25:06.055277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:25:06.055330 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:25:06.071935 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:25:06.123604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:25:06.123810 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:25:06.137698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:25:06.137752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:25:06.150340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:25:06.150382 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:25:06.172511 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:25:06.172610 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:25:06.189032 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:25:06.189103 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:25:06.205795 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:25:06.205865 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:25:06.245790 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:25:06.262474 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:25:06.262581 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:25:06.276434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:25:06.491215 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:25:06.276491 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:25:06.290165 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:25:06.290292 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:25:06.302095 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:25:06.302206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:25:06.315084 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:25:06.346169 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:25:06.366840 systemd[1]: Switching root.
Mar 17 17:25:06.536460 systemd-journald[218]: Journal stopped
Mar 17 17:25:11.195082 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:25:11.195106 kernel: SELinux: policy capability open_perms=1
Mar 17 17:25:11.195116 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:25:11.195123 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:25:11.195133 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:25:11.195141 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:25:11.195149 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:25:11.195157 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:25:11.195165 kernel: audit: type=1403 audit(1742232307.879:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:25:11.195175 systemd[1]: Successfully loaded SELinux policy in 173.620ms.
Mar 17 17:25:11.195186 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.521ms.
Mar 17 17:25:11.195196 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:25:11.195205 systemd[1]: Detected virtualization microsoft.
Mar 17 17:25:11.195213 systemd[1]: Detected architecture arm64.
Mar 17 17:25:11.195222 systemd[1]: Detected first boot.
Mar 17 17:25:11.195233 systemd[1]: Hostname set to .
Mar 17 17:25:11.195242 systemd[1]: Initializing machine ID from random generator.
Mar 17 17:25:11.195251 zram_generator::config[1159]: No configuration found.
Mar 17 17:25:11.195260 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:25:11.195269 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:25:11.195280 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:25:11.195289 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:25:11.195300 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:25:11.195309 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:25:11.195319 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:25:11.195328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:25:11.195337 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:25:11.195346 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:25:11.195355 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:25:11.195365 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:25:11.195374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:25:11.195383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:25:11.195392 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:25:11.195402 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:25:11.195411 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:25:11.195420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:25:11.195429 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 17 17:25:11.195440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:25:11.195449 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:25:11.195458 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:25:11.195470 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:25:11.195481 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:25:11.195491 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:25:11.195500 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:25:11.195509 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:25:11.195520 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:25:11.195543 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:25:11.195555 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:25:11.195564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:25:11.195574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:25:11.195583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:25:11.195595 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:25:11.195604 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:25:11.195614 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:25:11.195623 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:25:11.195632 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:25:11.195642 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:25:11.195651 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:25:11.195662 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:25:11.195672 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:25:11.195681 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:25:11.195691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:25:11.195701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:25:11.195711 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:25:11.195720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:25:11.195730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:25:11.195740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:25:11.195750 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:25:11.195759 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:25:11.195769 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:25:11.195779 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:25:11.195788 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:25:11.195797 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:25:11.195807 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:25:11.195818 kernel: fuse: init (API version 7.39)
Mar 17 17:25:11.195827 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:25:11.195836 kernel: loop: module loaded
Mar 17 17:25:11.195844 kernel: ACPI: bus type drm_connector registered
Mar 17 17:25:11.195853 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:25:11.195863 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:25:11.195888 systemd-journald[1262]: Collecting audit messages is disabled.
Mar 17 17:25:11.195911 systemd-journald[1262]: Journal started
Mar 17 17:25:11.195931 systemd-journald[1262]: Runtime Journal (/run/log/journal/085c2962a5ee47ccb307296452144510) is 8.0M, max 78.5M, 70.5M free.
Mar 17 17:25:10.131944 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:25:10.251400 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 17 17:25:10.251818 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:25:10.252121 systemd[1]: systemd-journald.service: Consumed 3.511s CPU time.
Mar 17 17:25:11.216568 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:25:11.232780 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:25:11.249143 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:25:11.249229 systemd[1]: Stopped verity-setup.service.
Mar 17 17:25:11.268413 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:25:11.269262 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:25:11.275829 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:25:11.283039 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:25:11.289240 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:25:11.295897 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:25:11.303389 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:25:11.309220 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:25:11.316511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:25:11.324332 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:25:11.324486 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:25:11.331691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:25:11.331829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:25:11.338839 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:25:11.338971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:25:11.345970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:25:11.346122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:25:11.354135 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:25:11.354263 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:25:11.361487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:25:11.361658 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:25:11.368462 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:25:11.376037 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:25:11.384143 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:25:11.392612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:25:11.410895 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:25:11.425669 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:25:11.433314 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:25:11.441240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:25:11.441283 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:25:11.448495 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:25:11.457100 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:25:11.465016 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:25:11.471434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:25:11.497707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:25:11.505255 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:25:11.512360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:25:11.514221 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:25:11.521795 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:25:11.522950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:25:11.532802 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:25:11.548276 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:25:11.556732 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:25:11.569325 systemd-journald[1262]: Time spent on flushing to /var/log/journal/085c2962a5ee47ccb307296452144510 is 61.438ms for 902 entries.
Mar 17 17:25:11.569325 systemd-journald[1262]: System Journal (/var/log/journal/085c2962a5ee47ccb307296452144510) is 11.8M, max 2.6G, 2.6G free.
Mar 17 17:25:11.750343 systemd-journald[1262]: Received client request to flush runtime journal.
Mar 17 17:25:11.750409 kernel: loop0: detected capacity change from 0 to 116808
Mar 17 17:25:11.750429 systemd-journald[1262]: /var/log/journal/085c2962a5ee47ccb307296452144510/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Mar 17 17:25:11.750459 systemd-journald[1262]: Rotating system journal.
Mar 17 17:25:11.568830 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:25:11.583244 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:25:11.597274 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:25:11.619666 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:25:11.641832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:25:11.664015 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:25:11.681861 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:25:11.689928 udevadm[1296]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:25:11.752733 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:25:11.772589 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:25:11.773231 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:25:11.926640 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:25:11.940704 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:25:12.046044 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Mar 17 17:25:12.046411 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Mar 17 17:25:12.051712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:25:12.200574 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:25:12.254555 kernel: loop1: detected capacity change from 0 to 28720
Mar 17 17:25:12.567644 kernel: loop2: detected capacity change from 0 to 201592
Mar 17 17:25:12.615560 kernel: loop3: detected capacity change from 0 to 113536
Mar 17 17:25:12.909584 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:25:12.923729 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:25:12.950257 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Mar 17 17:25:13.369587 kernel: loop4: detected capacity change from 0 to 116808
Mar 17 17:25:13.378562 kernel: loop5: detected capacity change from 0 to 28720
Mar 17 17:25:13.387556 kernel: loop6: detected capacity change from 0 to 201592
Mar 17 17:25:13.398564 kernel: loop7: detected capacity change from 0 to 113536
Mar 17 17:25:13.401246 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Mar 17 17:25:13.401712 (sd-merge)[1321]: Merged extensions into '/usr'.
Mar 17 17:25:13.405284 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:25:13.405420 systemd[1]: Reloading...
Mar 17 17:25:13.486609 zram_generator::config[1347]: No configuration found.
Mar 17 17:25:13.684574 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:25:13.727555 kernel: hv_vmbus: registering driver hv_balloon
Mar 17 17:25:13.727645 kernel: hv_vmbus: registering driver hyperv_fb
Mar 17 17:25:13.727685 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Mar 17 17:25:13.734939 kernel: hv_balloon: Memory hot add disabled on ARM64
Mar 17 17:25:13.735035 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Mar 17 17:25:13.744315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:25:13.751817 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Mar 17 17:25:13.764554 kernel: Console: switching to colour dummy device 80x25
Mar 17 17:25:13.764668 kernel: Console: switching to colour frame buffer device 128x48
Mar 17 17:25:13.823608 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1392)
Mar 17 17:25:13.833845 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 17 17:25:13.834231 systemd[1]: Reloading finished in 428 ms.
Mar 17 17:25:13.861718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:25:13.876279 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:25:13.911936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Mar 17 17:25:13.942235 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:25:13.947726 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:25:13.956281 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:25:13.964450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:25:13.989687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:25:14.001468 systemd[1]: Reloading requested from client PID 1499 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:25:14.001486 systemd[1]: Reloading...
Mar 17 17:25:14.080608 zram_generator::config[1535]: No configuration found.
Mar 17 17:25:14.189513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:25:14.225965 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:25:14.226237 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:25:14.226908 systemd-tmpfiles[1502]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:25:14.227127 systemd-tmpfiles[1502]: ACLs are not supported, ignoring.
Mar 17 17:25:14.227171 systemd-tmpfiles[1502]: ACLs are not supported, ignoring.
Mar 17 17:25:14.233317 systemd-tmpfiles[1502]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:25:14.233331 systemd-tmpfiles[1502]: Skipping /boot
Mar 17 17:25:14.240084 systemd-tmpfiles[1502]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:25:14.240100 systemd-tmpfiles[1502]: Skipping /boot
Mar 17 17:25:14.275325 systemd[1]: Reloading finished in 273 ms.
Mar 17 17:25:14.292166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:25:14.303563 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:25:14.314992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:25:14.332854 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:25:14.340616 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:25:14.360779 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:25:14.381675 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:25:14.393360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:25:14.408148 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:25:14.423991 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:25:14.437842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:25:14.442106 lvm[1599]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:25:14.443916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:25:14.458848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:25:14.478258 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:25:14.484744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:25:14.485924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:25:14.486099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:25:14.492948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:25:14.493104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 17 17:25:14.501321 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:25:14.509401 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:25:14.509588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:25:14.518616 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:25:14.531465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:25:14.538694 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:25:14.545796 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:25:14.556181 lvm[1617]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:25:14.556819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:25:14.569381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:25:14.577930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:25:14.583718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:25:14.589573 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:25:14.597717 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:25:14.605341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:25:14.605485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:25:14.612733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:25:14.612885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:25:14.622270 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 17 17:25:14.622425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:25:14.636295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:25:14.642791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:25:14.650754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:25:14.658736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:25:14.670655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:25:14.679144 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:25:14.679358 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:25:14.691509 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:25:14.700672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:25:14.702578 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:25:14.710451 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:25:14.710784 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:25:14.717633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:25:14.717767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:25:14.726272 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:25:14.726454 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:25:14.739586 systemd[1]: Finished ensure-sysext.service. Mar 17 17:25:14.747866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 17 17:25:14.747953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:25:14.835881 systemd-resolved[1601]: Positive Trust Anchors: Mar 17 17:25:14.836238 systemd-resolved[1601]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:25:14.836317 systemd-resolved[1601]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:25:14.839968 systemd-resolved[1601]: Using system hostname 'ci-4152.2.2-a-e33ca1f69b'. Mar 17 17:25:14.841446 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:25:14.847914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:25:14.925023 systemd-networkd[1501]: lo: Link UP Mar 17 17:25:14.925040 systemd-networkd[1501]: lo: Gained carrier Mar 17 17:25:14.927049 systemd-networkd[1501]: Enumeration completed Mar 17 17:25:14.927162 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:25:14.928157 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:25:14.928164 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:25:14.933820 systemd[1]: Reached target network.target - Network. Mar 17 17:25:14.945731 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 17 17:25:14.985152 augenrules[1657]: No rules Mar 17 17:25:14.985563 kernel: mlx5_core 642c:00:02.0 enP25644s1: Link up Mar 17 17:25:14.985573 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:25:14.986612 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:25:15.016557 kernel: hv_netvsc 00224876-f62f-0022-4876-f62f00224876 eth0: Data path switched to VF: enP25644s1 Mar 17 17:25:15.016233 systemd-networkd[1501]: enP25644s1: Link UP Mar 17 17:25:15.016347 systemd-networkd[1501]: eth0: Link UP Mar 17 17:25:15.016350 systemd-networkd[1501]: eth0: Gained carrier Mar 17 17:25:15.016367 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:25:15.020968 systemd-networkd[1501]: enP25644s1: Gained carrier Mar 17 17:25:15.022157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:25:15.033647 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 17:25:16.068269 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:25:16.075906 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:25:16.728717 systemd-networkd[1501]: enP25644s1: Gained IPv6LL Mar 17 17:25:16.920780 systemd-networkd[1501]: eth0: Gained IPv6LL Mar 17 17:25:16.923767 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:25:16.932317 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:25:19.356885 ldconfig[1288]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Mar 17 17:25:19.367721 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:25:19.379923 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:25:19.395294 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:25:19.402217 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:25:19.408382 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:25:19.416061 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:25:19.424081 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:25:19.430662 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:25:19.438989 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:25:19.447689 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:25:19.447743 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:25:19.453079 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:25:19.473787 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:25:19.482099 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:25:19.492570 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:25:19.499505 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:25:19.506039 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:25:19.512024 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:25:19.517526 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Mar 17 17:25:19.517573 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:25:19.527666 systemd[1]: Starting chronyd.service - NTP client/server... Mar 17 17:25:19.534683 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:25:19.544807 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:25:19.563878 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:25:19.575115 (chronyd)[1673]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Mar 17 17:25:19.576109 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:25:19.583290 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:25:19.589485 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:25:19.589540 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Mar 17 17:25:19.592768 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Mar 17 17:25:19.599028 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Mar 17 17:25:19.600209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:19.608979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:25:19.617784 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Mar 17 17:25:19.622307 KVP[1682]: KVP starting; pid is:1682 Mar 17 17:25:19.633591 chronyd[1689]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Mar 17 17:25:19.634479 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:25:19.643785 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:25:19.654755 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:25:19.670365 chronyd[1689]: Timezone right/UTC failed leap second check, ignoring Mar 17 17:25:19.671813 chronyd[1689]: Loaded seccomp filter (level 2) Mar 17 17:25:19.673362 jq[1680]: false Mar 17 17:25:19.672770 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:25:19.688394 kernel: hv_utils: KVP IC version 4.0 Mar 17 17:25:19.686230 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:25:19.684436 KVP[1682]: KVP LIC Version: 3.1 Mar 17 17:25:19.687887 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:25:19.700736 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:25:19.711079 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 17 17:25:19.718788 extend-filesystems[1681]: Found loop4 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found loop5 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found loop6 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found loop7 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda1 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda2 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda3 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found usr Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda4 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda6 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda7 Mar 17 17:25:19.725689 extend-filesystems[1681]: Found sda9 Mar 17 17:25:19.725689 extend-filesystems[1681]: Checking size of /dev/sda9 Mar 17 17:25:19.873645 dbus-daemon[1676]: [system] SELinux support is enabled Mar 17 17:25:19.726141 systemd[1]: Started chronyd.service - NTP client/server. Mar 17 17:25:19.956745 update_engine[1698]: I20250317 17:25:19.781159 1698 main.cc:92] Flatcar Update Engine starting Mar 17 17:25:19.956745 update_engine[1698]: I20250317 17:25:19.880048 1698 update_check_scheduler.cc:74] Next update check in 3m10s Mar 17 17:25:19.956912 extend-filesystems[1681]: Old size kept for /dev/sda9 Mar 17 17:25:19.956912 extend-filesystems[1681]: Found sr0 Mar 17 17:25:20.010591 jq[1703]: true Mar 17 17:25:19.942303 dbus-daemon[1676]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:25:19.754869 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:25:19.755061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:25:19.758455 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 17 17:25:20.012129 tar[1708]: linux-arm64/LICENSE Mar 17 17:25:20.012129 tar[1708]: linux-arm64/helm Mar 17 17:25:20.053778 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1714) Mar 17 17:25:19.785477 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:25:20.053899 jq[1731]: true Mar 17 17:25:19.785669 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:25:19.789969 systemd-logind[1694]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:25:19.791063 systemd-logind[1694]: New seat seat0. Mar 17 17:25:19.804506 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:25:19.840639 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:25:19.840823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:25:19.889846 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:25:19.940235 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:25:19.940264 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:25:19.969928 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:25:19.969950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:25:20.042732 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:25:20.068750 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 17 17:25:20.229919 coreos-metadata[1675]: Mar 17 17:25:20.229 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Mar 17 17:25:20.233198 coreos-metadata[1675]: Mar 17 17:25:20.232 INFO Fetch successful Mar 17 17:25:20.233198 coreos-metadata[1675]: Mar 17 17:25:20.232 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Mar 17 17:25:20.240729 coreos-metadata[1675]: Mar 17 17:25:20.240 INFO Fetch successful Mar 17 17:25:20.241171 coreos-metadata[1675]: Mar 17 17:25:20.241 INFO Fetching http://168.63.129.16/machine/d1544e7d-240a-4f65-b967-3fde5311a9f0/fe6a1b4e%2Dfc43%2D48c8%2Da023%2D1354e3e66efd.%5Fci%2D4152.2.2%2Da%2De33ca1f69b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Mar 17 17:25:20.243754 coreos-metadata[1675]: Mar 17 17:25:20.243 INFO Fetch successful Mar 17 17:25:20.243754 coreos-metadata[1675]: Mar 17 17:25:20.243 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Mar 17 17:25:20.257926 coreos-metadata[1675]: Mar 17 17:25:20.256 INFO Fetch successful Mar 17 17:25:20.323376 bash[1790]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:25:20.313396 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:25:20.322754 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:25:20.351072 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:25:20.358010 (ntainerd)[1808]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:25:20.362516 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:25:20.364266 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:25:20.366645 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 17 17:25:20.532584 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:25:20.750572 tar[1708]: linux-arm64/README.md Mar 17 17:25:20.766972 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:25:20.880724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:20.893118 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:21.274369 kubelet[1826]: E0317 17:25:21.274316 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:21.276624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:21.276760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:21.309793 sshd_keygen[1699]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:25:21.327963 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:25:21.340799 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:25:21.347796 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Mar 17 17:25:21.354101 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:25:21.354289 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:25:21.372093 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:25:21.387340 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Mar 17 17:25:21.396062 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 17 17:25:21.411130 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:25:21.418945 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 17 17:25:21.426988 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:25:21.502000 containerd[1808]: time="2025-03-17T17:25:21.501875900Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:25:21.526100 containerd[1808]: time="2025-03-17T17:25:21.525971780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.527564 containerd[1808]: time="2025-03-17T17:25:21.527495020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:25:21.527624 containerd[1808]: time="2025-03-17T17:25:21.527564140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:25:21.527624 containerd[1808]: time="2025-03-17T17:25:21.527584900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:25:21.527795 containerd[1808]: time="2025-03-17T17:25:21.527768220Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:25:21.527819 containerd[1808]: time="2025-03-17T17:25:21.527796940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.527887 containerd[1808]: time="2025-03-17T17:25:21.527864980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:25:21.527887 containerd[1808]: time="2025-03-17T17:25:21.527883820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528079 containerd[1808]: time="2025-03-17T17:25:21.528053500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528102 containerd[1808]: time="2025-03-17T17:25:21.528077340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528102 containerd[1808]: time="2025-03-17T17:25:21.528091900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528156 containerd[1808]: time="2025-03-17T17:25:21.528101660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528222 containerd[1808]: time="2025-03-17T17:25:21.528202820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528426 containerd[1808]: time="2025-03-17T17:25:21.528402740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528544 containerd[1808]: time="2025-03-17T17:25:21.528509100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:25:21.528592 containerd[1808]: time="2025-03-17T17:25:21.528574700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:25:21.528685 containerd[1808]: time="2025-03-17T17:25:21.528663820Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 17:25:21.528737 containerd[1808]: time="2025-03-17T17:25:21.528718780Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:25:21.610628 containerd[1808]: time="2025-03-17T17:25:21.610512380Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:25:21.610628 containerd[1808]: time="2025-03-17T17:25:21.610608980Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:25:21.610628 containerd[1808]: time="2025-03-17T17:25:21.610636220Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:25:21.610843 containerd[1808]: time="2025-03-17T17:25:21.610654180Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:25:21.610843 containerd[1808]: time="2025-03-17T17:25:21.610669100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:25:21.610883 containerd[1808]: time="2025-03-17T17:25:21.610861820Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:25:21.611146 containerd[1808]: time="2025-03-17T17:25:21.611122100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 17:25:21.611271 containerd[1808]: time="2025-03-17T17:25:21.611246980Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:25:21.611307 containerd[1808]: time="2025-03-17T17:25:21.611271180Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:25:21.611307 containerd[1808]: time="2025-03-17T17:25:21.611285980Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:25:21.611307 containerd[1808]: time="2025-03-17T17:25:21.611299220Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611355 containerd[1808]: time="2025-03-17T17:25:21.611313900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611355 containerd[1808]: time="2025-03-17T17:25:21.611327260Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611355 containerd[1808]: time="2025-03-17T17:25:21.611341260Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611411 containerd[1808]: time="2025-03-17T17:25:21.611357100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611411 containerd[1808]: time="2025-03-17T17:25:21.611371940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611411 containerd[1808]: time="2025-03-17T17:25:21.611383980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 17 17:25:21.611411 containerd[1808]: time="2025-03-17T17:25:21.611395140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:25:21.611481 containerd[1808]: time="2025-03-17T17:25:21.611415380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611481 containerd[1808]: time="2025-03-17T17:25:21.611429420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611481 containerd[1808]: time="2025-03-17T17:25:21.611441660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611481 containerd[1808]: time="2025-03-17T17:25:21.611454740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611481 containerd[1808]: time="2025-03-17T17:25:21.611466340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611481 containerd[1808]: time="2025-03-17T17:25:21.611479860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611491380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611504140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611517300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611557700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611570980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611582300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611594460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611606 containerd[1808]: time="2025-03-17T17:25:21.611609580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611634220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611647340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611658620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611728180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611747900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611758860Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611769780Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611779660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611792700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611802020Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:25:21.611845 containerd[1808]: time="2025-03-17T17:25:21.611812420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:25:21.612146 containerd[1808]: time="2025-03-17T17:25:21.612088180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:25:21.612260 containerd[1808]: time="2025-03-17T17:25:21.612155900Z" level=info msg="Connect containerd service" Mar 17 17:25:21.612260 containerd[1808]: time="2025-03-17T17:25:21.612189020Z" level=info msg="using legacy CRI server" Mar 17 17:25:21.612260 containerd[1808]: time="2025-03-17T17:25:21.612195100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:25:21.612366 containerd[1808]: 
time="2025-03-17T17:25:21.612341060Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:25:21.613097 containerd[1808]: time="2025-03-17T17:25:21.613055380Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:25:21.613247 containerd[1808]: time="2025-03-17T17:25:21.613210460Z" level=info msg="Start subscribing containerd event" Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613292420Z" level=info msg="Start recovering state" Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613382780Z" level=info msg="Start event monitor" Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613391020Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613397860Z" level=info msg="Start snapshots syncer" Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613417660Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613426140Z" level=info msg="Start streaming server" Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613440860Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:25:21.613816 containerd[1808]: time="2025-03-17T17:25:21.613494340Z" level=info msg="containerd successfully booted in 0.113800s" Mar 17 17:25:21.613718 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:25:21.621454 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:25:21.629140 systemd[1]: Startup finished in 742ms (kernel) + 13.179s (initrd) + 13.921s (userspace) = 27.843s. 
Mar 17 17:25:22.178202 login[1861]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 17 17:25:22.179684 login[1862]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:22.190137 systemd-logind[1694]: New session 2 of user core. Mar 17 17:25:22.192347 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:25:22.198893 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:25:22.210322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:25:22.218859 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:25:22.221962 (systemd)[1872]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:25:22.491922 systemd[1872]: Queued start job for default target default.target. Mar 17 17:25:22.499647 systemd[1872]: Created slice app.slice - User Application Slice. Mar 17 17:25:22.499685 systemd[1872]: Reached target paths.target - Paths. Mar 17 17:25:22.499699 systemd[1872]: Reached target timers.target - Timers. Mar 17 17:25:22.501110 systemd[1872]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:25:22.514800 systemd[1872]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:25:22.514923 systemd[1872]: Reached target sockets.target - Sockets. Mar 17 17:25:22.514938 systemd[1872]: Reached target basic.target - Basic System. Mar 17 17:25:22.514992 systemd[1872]: Reached target default.target - Main User Target. Mar 17 17:25:22.515021 systemd[1872]: Startup finished in 286ms. Mar 17 17:25:22.515235 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:25:22.516589 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:25:23.178978 login[1861]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:23.184066 systemd-logind[1694]: New session 1 of user core. Mar 17 17:25:23.192860 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:25:24.191565 waagent[1859]: 2025-03-17T17:25:24.188780Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Mar 17 17:25:24.195655 waagent[1859]: 2025-03-17T17:25:24.195569Z INFO Daemon Daemon OS: flatcar 4152.2.2 Mar 17 17:25:24.200617 waagent[1859]: 2025-03-17T17:25:24.200548Z INFO Daemon Daemon Python: 3.11.10 Mar 17 17:25:24.207149 waagent[1859]: 2025-03-17T17:25:24.207066Z INFO Daemon Daemon Run daemon Mar 17 17:25:24.213591 waagent[1859]: 2025-03-17T17:25:24.212539Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.2' Mar 17 17:25:24.222356 waagent[1859]: 2025-03-17T17:25:24.222248Z INFO Daemon Daemon Using waagent for provisioning Mar 17 17:25:24.228088 waagent[1859]: 2025-03-17T17:25:24.228010Z INFO Daemon Daemon Activate resource disk Mar 17 17:25:24.233239 waagent[1859]: 2025-03-17T17:25:24.233138Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Mar 17 17:25:24.247221 waagent[1859]: 2025-03-17T17:25:24.247113Z INFO Daemon Daemon Found device: None Mar 17 17:25:24.251950 waagent[1859]: 2025-03-17T17:25:24.251849Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Mar 17 17:25:24.261310 waagent[1859]: 2025-03-17T17:25:24.261239Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Mar 17 17:25:24.273647 waagent[1859]: 2025-03-17T17:25:24.273580Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:25:24.280343 waagent[1859]: 2025-03-17T17:25:24.280248Z INFO Daemon Daemon Running default provisioning handler Mar 17 
17:25:24.292897 waagent[1859]: 2025-03-17T17:25:24.292805Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Mar 17 17:25:24.308762 waagent[1859]: 2025-03-17T17:25:24.308675Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Mar 17 17:25:24.319300 waagent[1859]: 2025-03-17T17:25:24.319193Z INFO Daemon Daemon cloud-init is enabled: False Mar 17 17:25:24.325025 waagent[1859]: 2025-03-17T17:25:24.324932Z INFO Daemon Daemon Copying ovf-env.xml Mar 17 17:25:24.396984 waagent[1859]: 2025-03-17T17:25:24.394167Z INFO Daemon Daemon Successfully mounted dvd Mar 17 17:25:26.279284 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Mar 17 17:25:26.280219 waagent[1859]: 2025-03-17T17:25:26.279783Z INFO Daemon Daemon Detect protocol endpoint Mar 17 17:25:26.285135 waagent[1859]: 2025-03-17T17:25:26.285049Z INFO Daemon Daemon Clean protocol and wireserver endpoint Mar 17 17:25:26.291962 waagent[1859]: 2025-03-17T17:25:26.291867Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Mar 17 17:25:26.300469 waagent[1859]: 2025-03-17T17:25:26.300369Z INFO Daemon Daemon Test for route to 168.63.129.16 Mar 17 17:25:26.306995 waagent[1859]: 2025-03-17T17:25:26.306904Z INFO Daemon Daemon Route to 168.63.129.16 exists Mar 17 17:25:26.313191 waagent[1859]: 2025-03-17T17:25:26.313106Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Mar 17 17:25:26.425234 waagent[1859]: 2025-03-17T17:25:26.425178Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Mar 17 17:25:26.433107 waagent[1859]: 2025-03-17T17:25:26.433064Z INFO Daemon Daemon Wire protocol version:2012-11-30 Mar 17 17:25:26.439731 waagent[1859]: 2025-03-17T17:25:26.439633Z INFO Daemon Daemon Server preferred version:2015-04-05 Mar 17 17:25:27.369568 waagent[1859]: 2025-03-17T17:25:27.368825Z INFO Daemon Daemon Initializing goal state during protocol detection Mar 17 17:25:27.376229 waagent[1859]: 2025-03-17T17:25:27.376142Z INFO Daemon Daemon Forcing an update of the goal state. Mar 17 17:25:27.385888 waagent[1859]: 2025-03-17T17:25:27.385828Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:25:27.409023 waagent[1859]: 2025-03-17T17:25:27.408960Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 Mar 17 17:25:27.417283 waagent[1859]: 2025-03-17T17:25:27.416648Z INFO Daemon Mar 17 17:25:27.419948 waagent[1859]: 2025-03-17T17:25:27.419888Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5bcd2729-a70b-464f-ac75-7449fd0eb910 eTag: 14464969737864302951 source: Fabric] Mar 17 17:25:27.432380 waagent[1859]: 2025-03-17T17:25:27.432327Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Mar 17 17:25:27.440060 waagent[1859]: 2025-03-17T17:25:27.440006Z INFO Daemon Mar 17 17:25:27.443266 waagent[1859]: 2025-03-17T17:25:27.443211Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:25:27.461216 waagent[1859]: 2025-03-17T17:25:27.461167Z INFO Daemon Daemon Downloading artifacts profile blob Mar 17 17:25:27.558086 waagent[1859]: 2025-03-17T17:25:27.557985Z INFO Daemon Downloaded certificate {'thumbprint': 'CF76A22CCED062FF8C39A5D3BDACD242FDD34149', 'hasPrivateKey': True} Mar 17 17:25:27.568954 waagent[1859]: 2025-03-17T17:25:27.568882Z INFO Daemon Downloaded certificate {'thumbprint': '41518CF6F2B5C7CDB6D029B351552E19A8139D74', 'hasPrivateKey': False} Mar 17 17:25:27.579972 waagent[1859]: 2025-03-17T17:25:27.579900Z INFO Daemon Fetch goal state completed Mar 17 17:25:27.592613 waagent[1859]: 2025-03-17T17:25:27.592558Z INFO Daemon Daemon Starting provisioning Mar 17 17:25:27.598165 waagent[1859]: 2025-03-17T17:25:27.598090Z INFO Daemon Daemon Handle ovf-env.xml. Mar 17 17:25:27.602998 waagent[1859]: 2025-03-17T17:25:27.602929Z INFO Daemon Daemon Set hostname [ci-4152.2.2-a-e33ca1f69b] Mar 17 17:25:28.068485 waagent[1859]: 2025-03-17T17:25:28.068399Z INFO Daemon Daemon Publish hostname [ci-4152.2.2-a-e33ca1f69b] Mar 17 17:25:28.075825 waagent[1859]: 2025-03-17T17:25:28.075753Z INFO Daemon Daemon Examine /proc/net/route for primary interface Mar 17 17:25:28.082519 waagent[1859]: 2025-03-17T17:25:28.082455Z INFO Daemon Daemon Primary interface is [eth0] Mar 17 17:25:28.135232 systemd-networkd[1501]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:25:28.135243 systemd-networkd[1501]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 17:25:28.135272 systemd-networkd[1501]: eth0: DHCP lease lost Mar 17 17:25:28.136664 waagent[1859]: 2025-03-17T17:25:28.136558Z INFO Daemon Daemon Create user account if not exists Mar 17 17:25:28.142964 waagent[1859]: 2025-03-17T17:25:28.142894Z INFO Daemon Daemon User core already exists, skip useradd Mar 17 17:25:28.153999 waagent[1859]: 2025-03-17T17:25:28.149463Z INFO Daemon Daemon Configure sudoer Mar 17 17:25:28.149603 systemd-networkd[1501]: eth0: DHCPv6 lease lost Mar 17 17:25:28.154797 waagent[1859]: 2025-03-17T17:25:28.154722Z INFO Daemon Daemon Configure sshd Mar 17 17:25:28.159785 waagent[1859]: 2025-03-17T17:25:28.159714Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Mar 17 17:25:28.175192 waagent[1859]: 2025-03-17T17:25:28.175102Z INFO Daemon Daemon Deploy ssh public key. Mar 17 17:25:28.182614 systemd-networkd[1501]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Mar 17 17:25:29.281556 waagent[1859]: 2025-03-17T17:25:29.280879Z INFO Daemon Daemon Provisioning complete Mar 17 17:25:29.299256 waagent[1859]: 2025-03-17T17:25:29.299196Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Mar 17 17:25:29.306090 waagent[1859]: 2025-03-17T17:25:29.306007Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Mar 17 17:25:29.316216 waagent[1859]: 2025-03-17T17:25:29.316147Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Mar 17 17:25:29.458408 waagent[1927]: 2025-03-17T17:25:29.458319Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Mar 17 17:25:29.459336 waagent[1927]: 2025-03-17T17:25:29.458917Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.2 Mar 17 17:25:29.459336 waagent[1927]: 2025-03-17T17:25:29.458999Z INFO ExtHandler ExtHandler Python: 3.11.10 Mar 17 17:25:29.500594 waagent[1927]: 2025-03-17T17:25:29.499202Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Mar 17 17:25:29.500594 waagent[1927]: 2025-03-17T17:25:29.499468Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:25:29.500594 waagent[1927]: 2025-03-17T17:25:29.499552Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:25:29.510735 waagent[1927]: 2025-03-17T17:25:29.510651Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Mar 17 17:25:29.516820 waagent[1927]: 2025-03-17T17:25:29.516769Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 Mar 17 17:25:29.517386 waagent[1927]: 2025-03-17T17:25:29.517338Z INFO ExtHandler Mar 17 17:25:29.517457 waagent[1927]: 2025-03-17T17:25:29.517426Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6ee82606-88aa-44d4-baa6-ee30a5b7e063 eTag: 14464969737864302951 source: Fabric] Mar 17 17:25:29.517778 waagent[1927]: 2025-03-17T17:25:29.517734Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Mar 17 17:25:29.518349 waagent[1927]: 2025-03-17T17:25:29.518300Z INFO ExtHandler Mar 17 17:25:29.518415 waagent[1927]: 2025-03-17T17:25:29.518383Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Mar 17 17:25:29.522921 waagent[1927]: 2025-03-17T17:25:29.522874Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Mar 17 17:25:29.609180 waagent[1927]: 2025-03-17T17:25:29.609025Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CF76A22CCED062FF8C39A5D3BDACD242FDD34149', 'hasPrivateKey': True} Mar 17 17:25:29.609620 waagent[1927]: 2025-03-17T17:25:29.609560Z INFO ExtHandler Downloaded certificate {'thumbprint': '41518CF6F2B5C7CDB6D029B351552E19A8139D74', 'hasPrivateKey': False} Mar 17 17:25:29.610090 waagent[1927]: 2025-03-17T17:25:29.610039Z INFO ExtHandler Fetch goal state completed Mar 17 17:25:29.626140 waagent[1927]: 2025-03-17T17:25:29.626059Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1927 Mar 17 17:25:29.626308 waagent[1927]: 2025-03-17T17:25:29.626265Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Mar 17 17:25:29.628091 waagent[1927]: 2025-03-17T17:25:29.628035Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.2', '', 'Flatcar Container Linux by Kinvolk'] Mar 17 17:25:29.628495 waagent[1927]: 2025-03-17T17:25:29.628451Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Mar 17 17:25:29.651364 waagent[1927]: 2025-03-17T17:25:29.651311Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Mar 17 17:25:29.651607 waagent[1927]: 2025-03-17T17:25:29.651559Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Mar 17 17:25:29.658313 waagent[1927]: 2025-03-17T17:25:29.658250Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Mar 17 17:25:29.665754 systemd[1]: Reloading requested from client PID 1942 ('systemctl') (unit waagent.service)... Mar 17 17:25:29.665775 systemd[1]: Reloading... Mar 17 17:25:29.760576 zram_generator::config[1982]: No configuration found. Mar 17 17:25:29.869319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:29.950358 systemd[1]: Reloading finished in 284 ms. Mar 17 17:25:29.976874 waagent[1927]: 2025-03-17T17:25:29.976741Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Mar 17 17:25:29.983011 systemd[1]: Reloading requested from client PID 2030 ('systemctl') (unit waagent.service)... Mar 17 17:25:29.983040 systemd[1]: Reloading... Mar 17 17:25:30.068488 zram_generator::config[2064]: No configuration found. Mar 17 17:25:30.183512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:30.265186 systemd[1]: Reloading finished in 281 ms. Mar 17 17:25:30.287604 waagent[1927]: 2025-03-17T17:25:30.287104Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Mar 17 17:25:30.287604 waagent[1927]: 2025-03-17T17:25:30.287296Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Mar 17 17:25:30.610594 waagent[1927]: 2025-03-17T17:25:30.610041Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Mar 17 17:25:30.610931 waagent[1927]: 2025-03-17T17:25:30.610746Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Mar 17 17:25:30.611695 waagent[1927]: 2025-03-17T17:25:30.611592Z INFO ExtHandler ExtHandler Starting env monitor service. Mar 17 17:25:30.612264 waagent[1927]: 2025-03-17T17:25:30.612077Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Mar 17 17:25:30.612548 waagent[1927]: 2025-03-17T17:25:30.612483Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:25:30.613489 waagent[1927]: 2025-03-17T17:25:30.612638Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Mar 17 17:25:30.613489 waagent[1927]: 2025-03-17T17:25:30.612737Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:25:30.613489 waagent[1927]: 2025-03-17T17:25:30.612879Z INFO EnvHandler ExtHandler Configure routes Mar 17 17:25:30.613489 waagent[1927]: 2025-03-17T17:25:30.612943Z INFO EnvHandler ExtHandler Gateway:None Mar 17 17:25:30.613489 waagent[1927]: 2025-03-17T17:25:30.612986Z INFO EnvHandler ExtHandler Routes:None Mar 17 17:25:30.613830 waagent[1927]: 2025-03-17T17:25:30.613774Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Mar 17 17:25:30.613987 waagent[1927]: 2025-03-17T17:25:30.613922Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Mar 17 17:25:30.614331 waagent[1927]: 2025-03-17T17:25:30.614285Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Mar 17 17:25:30.614695 waagent[1927]: 2025-03-17T17:25:30.614646Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Mar 17 17:25:30.614992 waagent[1927]: 2025-03-17T17:25:30.614946Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Mar 17 17:25:30.614992 waagent[1927]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Mar 17 17:25:30.614992 waagent[1927]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Mar 17 17:25:30.614992 waagent[1927]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Mar 17 17:25:30.614992 waagent[1927]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:25:30.614992 waagent[1927]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:25:30.614992 waagent[1927]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Mar 17 17:25:30.615853 waagent[1927]: 2025-03-17T17:25:30.615764Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Mar 17 17:25:30.616651 waagent[1927]: 2025-03-17T17:25:30.615706Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Mar 17 17:25:30.616651 waagent[1927]: 2025-03-17T17:25:30.616383Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Mar 17 17:25:30.629319 waagent[1927]: 2025-03-17T17:25:30.629251Z INFO ExtHandler ExtHandler Mar 17 17:25:30.629437 waagent[1927]: 2025-03-17T17:25:30.629389Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6f3537cc-b5a7-4420-a715-619eea7da194 correlation 70df159f-fe58-4c06-b1d2-f0431f5df022 created: 2025-03-17T17:24:05.416656Z] Mar 17 17:25:30.629885 waagent[1927]: 2025-03-17T17:25:30.629834Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Mar 17 17:25:30.630498 waagent[1927]: 2025-03-17T17:25:30.630454Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Mar 17 17:25:30.671785 waagent[1927]: 2025-03-17T17:25:30.671710Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 098F32BD-5955-4F7D-A8FB-9718C2078128;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Mar 17 17:25:30.706800 waagent[1927]: 2025-03-17T17:25:30.706705Z INFO MonitorHandler ExtHandler Network interfaces: Mar 17 17:25:30.706800 waagent[1927]: Executing ['ip', '-a', '-o', 'link']: Mar 17 17:25:30.706800 waagent[1927]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Mar 17 17:25:30.706800 waagent[1927]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:f6:2f brd ff:ff:ff:ff:ff:ff Mar 17 17:25:30.706800 waagent[1927]: 3: enP25644s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:76:f6:2f brd ff:ff:ff:ff:ff:ff\ altname enP25644p0s2 Mar 17 17:25:30.706800 waagent[1927]: Executing ['ip', '-4', '-a', '-o', 'address']: Mar 17 17:25:30.706800 waagent[1927]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Mar 17 17:25:30.706800 waagent[1927]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Mar 17 17:25:30.706800 waagent[1927]: Executing ['ip', '-6', '-a', '-o', 'address']: Mar 17 17:25:30.706800 waagent[1927]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Mar 17 17:25:30.706800 waagent[1927]: 2: eth0 inet6 fe80::222:48ff:fe76:f62f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:25:30.706800 waagent[1927]: 3: enP25644s1 inet6 fe80::222:48ff:fe76:f62f/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Mar 17 17:25:30.739565 waagent[1927]: 2025-03-17T17:25:30.739166Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Mar 17 17:25:30.739565 waagent[1927]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:25:30.739565 waagent[1927]: pkts bytes target prot opt in out source destination Mar 17 17:25:30.739565 waagent[1927]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:25:30.739565 waagent[1927]: pkts bytes target prot opt in out source destination Mar 17 17:25:30.739565 waagent[1927]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:25:30.739565 waagent[1927]: pkts bytes target prot opt in out source destination Mar 17 17:25:30.739565 waagent[1927]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:25:30.739565 waagent[1927]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:25:30.739565 waagent[1927]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 17 17:25:30.744321 waagent[1927]: 2025-03-17T17:25:30.743831Z INFO EnvHandler ExtHandler Current Firewall rules: Mar 17 17:25:30.744321 waagent[1927]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:25:30.744321 waagent[1927]: pkts bytes target prot opt in out source destination Mar 17 17:25:30.744321 waagent[1927]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:25:30.744321 waagent[1927]: pkts bytes target prot opt in out source destination Mar 17 17:25:30.744321 waagent[1927]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Mar 17 17:25:30.744321 waagent[1927]: pkts bytes target prot opt in out source destination Mar 17 17:25:30.744321 waagent[1927]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Mar 17 17:25:30.744321 waagent[1927]: 4 415 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Mar 17 17:25:30.744321 waagent[1927]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Mar 
17 17:25:30.744321 waagent[1927]: 2025-03-17T17:25:30.744151Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Mar 17 17:25:31.516129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:25:31.523744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:31.626329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:31.630908 (kubelet)[2161]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:31.736952 kubelet[2161]: E0317 17:25:31.736866 2161 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:31.739512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:31.739666 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:41.766307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:25:41.773732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:41.880378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:25:41.884917 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:41.981879 kubelet[2176]: E0317 17:25:41.981829 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:41.983996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:41.984297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:43.473647 chronyd[1689]: Selected source PHC0 Mar 17 17:25:52.016299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:25:52.025880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:52.170810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:52.182846 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:52.219487 kubelet[2191]: E0317 17:25:52.219407 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:52.221795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:52.221950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:26:01.835858 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Mar 17 17:26:02.266218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 17:26:02.274799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:02.374190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:02.385113 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:26:02.484917 kubelet[2206]: E0317 17:26:02.484851 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:26:02.487362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:26:02.487514 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:26:04.817196 update_engine[1698]: I20250317 17:26:04.816574 1698 update_attempter.cc:509] Updating boot flags... Mar 17 17:26:06.156555 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2228) Mar 17 17:26:06.282824 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2228) Mar 17 17:26:12.291140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:26:12.297892 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:41148.service - OpenSSH per-connection server daemon (10.200.16.10:41148). Mar 17 17:26:12.516073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 17 17:26:12.521731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:12.625857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:26:12.638837 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:26:12.676340 kubelet[2338]: E0317 17:26:12.676276 2338 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:26:12.678605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:26:12.678757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:26:12.895922 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 41148 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:12.897160 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:12.901974 systemd-logind[1694]: New session 3 of user core. Mar 17 17:26:12.912721 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:26:13.321805 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:41160.service - OpenSSH per-connection server daemon (10.200.16.10:41160). Mar 17 17:26:13.764548 sshd[2348]: Accepted publickey for core from 10.200.16.10 port 41160 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:13.765961 sshd-session[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:13.769933 systemd-logind[1694]: New session 4 of user core. Mar 17 17:26:13.779764 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 17 17:26:14.086333 sshd[2350]: Connection closed by 10.200.16.10 port 41160 Mar 17 17:26:14.085908 sshd-session[2348]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:14.088808 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:41160.service: Deactivated successfully. Mar 17 17:26:14.090503 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:26:14.091979 systemd-logind[1694]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:26:14.093234 systemd-logind[1694]: Removed session 4. Mar 17 17:26:14.163697 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:41164.service - OpenSSH per-connection server daemon (10.200.16.10:41164). Mar 17 17:26:14.598472 sshd[2355]: Accepted publickey for core from 10.200.16.10 port 41164 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:14.599813 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:14.603801 systemd-logind[1694]: New session 5 of user core. Mar 17 17:26:14.614779 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:26:14.907648 sshd[2357]: Connection closed by 10.200.16.10 port 41164 Mar 17 17:26:14.908178 sshd-session[2355]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:14.912011 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:41164.service: Deactivated successfully. Mar 17 17:26:14.913798 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:26:14.915379 systemd-logind[1694]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:26:14.916409 systemd-logind[1694]: Removed session 5. Mar 17 17:26:15.002813 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:41168.service - OpenSSH per-connection server daemon (10.200.16.10:41168). 
Mar 17 17:26:15.473717 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 41168 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:15.475033 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:15.478985 systemd-logind[1694]: New session 6 of user core. Mar 17 17:26:15.489766 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:26:15.828748 sshd[2364]: Connection closed by 10.200.16.10 port 41168 Mar 17 17:26:15.829313 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:15.832494 systemd-logind[1694]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:26:15.832760 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:41168.service: Deactivated successfully. Mar 17 17:26:15.834316 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:26:15.836261 systemd-logind[1694]: Removed session 6. Mar 17 17:26:15.908264 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:41178.service - OpenSSH per-connection server daemon (10.200.16.10:41178). Mar 17 17:26:16.340634 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 41178 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:16.341968 sshd-session[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:16.346734 systemd-logind[1694]: New session 7 of user core. Mar 17 17:26:16.352758 systemd[1]: Started session-7.scope - Session 7 of User core. 
Mar 17 17:26:16.750992 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:26:16.751270 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:16.763464 sudo[2372]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:16.832246 sshd[2371]: Connection closed by 10.200.16.10 port 41178 Mar 17 17:26:16.831414 sshd-session[2369]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:16.834722 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:41178.service: Deactivated successfully. Mar 17 17:26:16.836405 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:26:16.837899 systemd-logind[1694]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:26:16.839428 systemd-logind[1694]: Removed session 7. Mar 17 17:26:16.909774 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:41194.service - OpenSSH per-connection server daemon (10.200.16.10:41194). Mar 17 17:26:17.343290 sshd[2377]: Accepted publickey for core from 10.200.16.10 port 41194 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:17.412740 sshd-session[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:17.416973 systemd-logind[1694]: New session 8 of user core. Mar 17 17:26:17.427779 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 17:26:17.610664 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:26:17.611569 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:17.615324 sudo[2381]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:17.621135 sudo[2380]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:26:17.621419 sudo[2380]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:17.641150 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:26:17.665898 augenrules[2403]: No rules Mar 17 17:26:17.667218 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:26:17.668632 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:26:17.670336 sudo[2380]: pam_unix(sudo:session): session closed for user root Mar 17 17:26:17.739564 sshd[2379]: Connection closed by 10.200.16.10 port 41194 Mar 17 17:26:17.738644 sshd-session[2377]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:17.741716 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:41194.service: Deactivated successfully. Mar 17 17:26:17.743597 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:26:17.745923 systemd-logind[1694]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:26:17.747071 systemd-logind[1694]: Removed session 8. Mar 17 17:26:17.816073 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:41206.service - OpenSSH per-connection server daemon (10.200.16.10:41206). 
Mar 17 17:26:18.248738 sshd[2411]: Accepted publickey for core from 10.200.16.10 port 41206 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:26:18.250120 sshd-session[2411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:18.254313 systemd-logind[1694]: New session 9 of user core. Mar 17 17:26:18.261713 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:26:18.493474 sudo[2414]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:26:18.493826 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:26:19.688852 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:26:19.689917 (dockerd)[2431]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:26:20.340551 dockerd[2431]: time="2025-03-17T17:26:20.338579185Z" level=info msg="Starting up" Mar 17 17:26:20.659591 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2935275483-merged.mount: Deactivated successfully. Mar 17 17:26:20.724135 dockerd[2431]: time="2025-03-17T17:26:20.724076325Z" level=info msg="Loading containers: start." Mar 17 17:26:20.909572 kernel: Initializing XFRM netlink socket Mar 17 17:26:21.017172 systemd-networkd[1501]: docker0: Link UP Mar 17 17:26:21.060116 dockerd[2431]: time="2025-03-17T17:26:21.060061698Z" level=info msg="Loading containers: done." 
Mar 17 17:26:21.101606 dockerd[2431]: time="2025-03-17T17:26:21.101383990Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:26:21.101606 dockerd[2431]: time="2025-03-17T17:26:21.101509910Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:26:21.101843 dockerd[2431]: time="2025-03-17T17:26:21.101686790Z" level=info msg="Daemon has completed initialization" Mar 17 17:26:21.170688 dockerd[2431]: time="2025-03-17T17:26:21.170439544Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:26:21.170878 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:26:21.787865 containerd[1808]: time="2025-03-17T17:26:21.787819287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:26:22.683497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 17 17:26:22.692688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:22.693932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548713862.mount: Deactivated successfully. Mar 17 17:26:22.912932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:26:22.917950 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:26:22.956586 kubelet[2628]: E0317 17:26:22.955613 2628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:26:22.958488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:26:22.958656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:26:24.587379 containerd[1808]: time="2025-03-17T17:26:24.587315724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:24.591558 containerd[1808]: time="2025-03-17T17:26:24.591264002Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231950" Mar 17 17:26:24.594540 containerd[1808]: time="2025-03-17T17:26:24.594495680Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:24.604680 containerd[1808]: time="2025-03-17T17:26:24.604631393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:24.606103 containerd[1808]: time="2025-03-17T17:26:24.605907632Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.818043305s" Mar 17 17:26:24.606103 containerd[1808]: time="2025-03-17T17:26:24.605961592Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 17 17:26:24.607039 containerd[1808]: time="2025-03-17T17:26:24.606704711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:26:26.091982 containerd[1808]: time="2025-03-17T17:26:26.091324158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:26.094053 containerd[1808]: time="2025-03-17T17:26:26.093769596Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530032" Mar 17 17:26:26.098938 containerd[1808]: time="2025-03-17T17:26:26.098871032Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:26.109832 containerd[1808]: time="2025-03-17T17:26:26.109723585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:26.111287 containerd[1808]: time="2025-03-17T17:26:26.110923664Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" 
in 1.504178873s" Mar 17 17:26:26.111287 containerd[1808]: time="2025-03-17T17:26:26.110964104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 17 17:26:26.111785 containerd[1808]: time="2025-03-17T17:26:26.111600024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:26:27.643588 containerd[1808]: time="2025-03-17T17:26:27.642704479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:27.646445 containerd[1808]: time="2025-03-17T17:26:27.646389117Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482561" Mar 17 17:26:27.651710 containerd[1808]: time="2025-03-17T17:26:27.651670033Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:27.658874 containerd[1808]: time="2025-03-17T17:26:27.658793068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:27.660141 containerd[1808]: time="2025-03-17T17:26:27.659987387Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.548351123s" Mar 17 17:26:27.660141 containerd[1808]: time="2025-03-17T17:26:27.660031227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference 
\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 17 17:26:27.660835 containerd[1808]: time="2025-03-17T17:26:27.660628987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:26:28.836696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685327072.mount: Deactivated successfully. Mar 17 17:26:29.239695 containerd[1808]: time="2025-03-17T17:26:29.238984450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:29.242151 containerd[1808]: time="2025-03-17T17:26:29.242068368Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095" Mar 17 17:26:29.246018 containerd[1808]: time="2025-03-17T17:26:29.245936246Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:29.251620 containerd[1808]: time="2025-03-17T17:26:29.251546922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:29.252407 containerd[1808]: time="2025-03-17T17:26:29.252245162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.591582295s" Mar 17 17:26:29.252407 containerd[1808]: time="2025-03-17T17:26:29.252289962Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 17 17:26:29.253164 
containerd[1808]: time="2025-03-17T17:26:29.253038001Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 17:26:30.046654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838405824.mount: Deactivated successfully. Mar 17 17:26:31.364833 containerd[1808]: time="2025-03-17T17:26:31.364768987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:31.367559 containerd[1808]: time="2025-03-17T17:26:31.367473906Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Mar 17 17:26:31.374434 containerd[1808]: time="2025-03-17T17:26:31.374376901Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:31.383120 containerd[1808]: time="2025-03-17T17:26:31.383032895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:31.384774 containerd[1808]: time="2025-03-17T17:26:31.384310614Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.131233213s" Mar 17 17:26:31.384774 containerd[1808]: time="2025-03-17T17:26:31.384644414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 17 17:26:31.388223 containerd[1808]: time="2025-03-17T17:26:31.387921132Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:26:32.025422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212609521.mount: Deactivated successfully. Mar 17 17:26:32.052605 containerd[1808]: time="2025-03-17T17:26:32.052140570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:32.056221 containerd[1808]: time="2025-03-17T17:26:32.055968407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 17 17:26:32.101703 containerd[1808]: time="2025-03-17T17:26:32.101615617Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:32.765989 containerd[1808]: time="2025-03-17T17:26:32.765900135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:32.767483 containerd[1808]: time="2025-03-17T17:26:32.766742935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.378769723s" Mar 17 17:26:32.767483 containerd[1808]: time="2025-03-17T17:26:32.766782735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:26:32.767483 containerd[1808]: time="2025-03-17T17:26:32.767321494Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 17:26:33.016133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. 
Mar 17 17:26:33.021889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:33.132468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:33.142841 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:26:33.180584 kubelet[2764]: E0317 17:26:33.180442 2764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:26:33.182472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:26:33.182633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:26:33.911770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394510843.mount: Deactivated successfully. 
Mar 17 17:26:38.212890 containerd[1808]: time="2025-03-17T17:26:38.212820314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:38.216621 containerd[1808]: time="2025-03-17T17:26:38.216548671Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Mar 17 17:26:38.221505 containerd[1808]: time="2025-03-17T17:26:38.221441828Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:38.229711 containerd[1808]: time="2025-03-17T17:26:38.229618423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:38.231418 containerd[1808]: time="2025-03-17T17:26:38.231035422Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 5.463686048s" Mar 17 17:26:38.231418 containerd[1808]: time="2025-03-17T17:26:38.231081462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 17 17:26:43.218925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 17 17:26:43.226959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:43.241178 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:26:43.241269 systemd[1]: kubelet.service: Failed with result 'signal'. 
Mar 17 17:26:43.242587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:43.249854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:43.277808 systemd[1]: Reloading requested from client PID 2857 ('systemctl') (unit session-9.scope)... Mar 17 17:26:43.277824 systemd[1]: Reloading... Mar 17 17:26:43.399095 zram_generator::config[2900]: No configuration found. Mar 17 17:26:43.510176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:26:43.590009 systemd[1]: Reloading finished in 311 ms. Mar 17 17:26:44.674248 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:26:44.674343 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:26:44.674969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:44.682911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:26:47.122673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:26:47.132076 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:26:47.172734 kubelet[2961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:26:47.173117 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Mar 17 17:26:47.173164 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:26:47.173331 kubelet[2961]: I0317 17:26:47.173297 2961 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:26:49.709731 kubelet[2961]: I0317 17:26:49.709160 2961 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:26:49.709731 kubelet[2961]: I0317 17:26:49.709201 2961 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:26:49.709731 kubelet[2961]: I0317 17:26:49.709564 2961 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:26:49.729618 kubelet[2961]: E0317 17:26:49.729571 2961 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:26:49.731782 kubelet[2961]: I0317 17:26:49.731743 2961 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:26:49.743326 kubelet[2961]: E0317 17:26:49.743276 2961 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:26:49.743326 kubelet[2961]: I0317 17:26:49.743326 2961 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 17 17:26:49.746570 kubelet[2961]: I0317 17:26:49.746524 2961 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:26:49.746828 kubelet[2961]: I0317 17:26:49.746790 2961 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:26:49.747011 kubelet[2961]: I0317 17:26:49.746826 2961 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.2-a-e33ca1f69b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:26:49.747109 kubelet[2961]: I0317 17:26:49.747022 2961 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:26:49.747109 kubelet[2961]: I0317 17:26:49.747031 2961 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 17:26:49.747200 kubelet[2961]: I0317 17:26:49.747179 2961 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:26:49.749956 kubelet[2961]: I0317 17:26:49.749924 2961 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 17:26:49.750067 kubelet[2961]: I0317 17:26:49.750047 2961 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:26:49.750093 kubelet[2961]: I0317 17:26:49.750080 2961 kubelet.go:352] "Adding apiserver pod source"
Mar 17 17:26:49.750093 kubelet[2961]: I0317 17:26:49.750092 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:26:49.755576 kubelet[2961]: W0317 17:26:49.754763 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:49.755576 kubelet[2961]: E0317 17:26:49.754827 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:49.755576 kubelet[2961]: W0317 17:26:49.755201 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:49.755576 kubelet[2961]: E0317 17:26:49.755238 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:49.755576 kubelet[2961]: I0317 17:26:49.755340 2961 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:26:49.756146 kubelet[2961]: I0317 17:26:49.756123 2961 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:26:49.756265 kubelet[2961]: W0317 17:26:49.756254 2961 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:26:49.757718 kubelet[2961]: I0317 17:26:49.757697 2961 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 17:26:49.757863 kubelet[2961]: I0317 17:26:49.757852 2961 server.go:1287] "Started kubelet"
Mar 17 17:26:49.759004 kubelet[2961]: I0317 17:26:49.758955 2961 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:26:49.759937 kubelet[2961]: I0317 17:26:49.759909 2961 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 17:26:49.761303 kubelet[2961]: I0317 17:26:49.760683 2961 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:26:49.761303 kubelet[2961]: I0317 17:26:49.760997 2961 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:26:49.761303 kubelet[2961]: E0317 17:26:49.761170 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.2-a-e33ca1f69b.182da7255bc9c638 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.2-a-e33ca1f69b,UID:ci-4152.2.2-a-e33ca1f69b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.2-a-e33ca1f69b,},FirstTimestamp:2025-03-17 17:26:49.757828664 +0000 UTC m=+2.622231973,LastTimestamp:2025-03-17 17:26:49.757828664 +0000 UTC m=+2.622231973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.2-a-e33ca1f69b,}"
Mar 17 17:26:49.763897 kubelet[2961]: I0317 17:26:49.763244 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:26:49.764931 kubelet[2961]: I0317 17:26:49.764255 2961 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:26:49.767554 kubelet[2961]: E0317 17:26:49.767502 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:49.768375 kubelet[2961]: I0317 17:26:49.767997 2961 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 17:26:49.768375 kubelet[2961]: I0317 17:26:49.768229 2961 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:26:49.768375 kubelet[2961]: I0317 17:26:49.768300 2961 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:26:49.770607 kubelet[2961]: W0317 17:26:49.769989 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:49.770607 kubelet[2961]: E0317 17:26:49.770063 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:49.771068 kubelet[2961]: I0317 17:26:49.771033 2961 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:26:49.773558 kubelet[2961]: I0317 17:26:49.771357 2961 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:26:49.773558 kubelet[2961]: E0317 17:26:49.772941 2961 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:26:49.773558 kubelet[2961]: I0317 17:26:49.773096 2961 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:26:49.775321 kubelet[2961]: E0317 17:26:49.774963 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.2-a-e33ca1f69b?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="200ms"
Mar 17 17:26:49.810114 kubelet[2961]: I0317 17:26:49.810051 2961 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 17:26:49.810114 kubelet[2961]: I0317 17:26:49.810074 2961 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 17:26:49.810292 kubelet[2961]: I0317 17:26:49.810147 2961 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:26:49.868367 kubelet[2961]: E0317 17:26:49.868323 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:49.968814 kubelet[2961]: E0317 17:26:49.968704 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:49.976276 kubelet[2961]: E0317 17:26:49.976230 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.2-a-e33ca1f69b?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="400ms"
Mar 17 17:26:50.069401 kubelet[2961]: E0317 17:26:50.069358 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.169577 kubelet[2961]: E0317 17:26:50.169507 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.270485 kubelet[2961]: E0317 17:26:50.270436 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.371030 kubelet[2961]: E0317 17:26:50.370981 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.377752 kubelet[2961]: E0317 17:26:50.377703 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.2-a-e33ca1f69b?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="800ms"
Mar 17 17:26:50.471998 kubelet[2961]: E0317 17:26:50.471956 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.572506 kubelet[2961]: E0317 17:26:50.572398 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.673003 kubelet[2961]: E0317 17:26:50.672954 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.773424 kubelet[2961]: E0317 17:26:50.773381 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.808131 kubelet[2961]: W0317 17:26:50.808057 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:50.808240 kubelet[2961]: E0317 17:26:50.808135 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:50.873621 kubelet[2961]: E0317 17:26:50.873495 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:50.894238 kubelet[2961]: W0317 17:26:50.894169 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:50.894315 kubelet[2961]: E0317 17:26:50.894248 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:50.973763 kubelet[2961]: E0317 17:26:50.973708 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.074262 kubelet[2961]: E0317 17:26:51.074223 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.157001 kubelet[2961]: W0317 17:26:51.156864 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:51.157001 kubelet[2961]: E0317 17:26:51.156931 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:51.174391 kubelet[2961]: E0317 17:26:51.174353 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.178974 kubelet[2961]: E0317 17:26:51.178935 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.2-a-e33ca1f69b?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="1.6s"
Mar 17 17:26:51.274809 kubelet[2961]: E0317 17:26:51.274740 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.375288 kubelet[2961]: E0317 17:26:51.375238 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.475759 kubelet[2961]: E0317 17:26:51.475632 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.576032 kubelet[2961]: E0317 17:26:51.575979 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.676546 kubelet[2961]: E0317 17:26:51.676499 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.741991 kubelet[2961]: E0317 17:26:51.741876 2961 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:51.777110 kubelet[2961]: E0317 17:26:51.777063 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.877594 kubelet[2961]: E0317 17:26:51.877549 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:51.925236 kubelet[2961]: E0317 17:26:51.925115 2961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.2-a-e33ca1f69b.182da7255bc9c638 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.2-a-e33ca1f69b,UID:ci-4152.2.2-a-e33ca1f69b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.2-a-e33ca1f69b,},FirstTimestamp:2025-03-17 17:26:49.757828664 +0000 UTC m=+2.622231973,LastTimestamp:2025-03-17 17:26:49.757828664 +0000 UTC m=+2.622231973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.2-a-e33ca1f69b,}"
Mar 17 17:26:51.978574 kubelet[2961]: E0317 17:26:51.978509 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:52.079109 kubelet[2961]: E0317 17:26:52.079058 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:52.179719 kubelet[2961]: E0317 17:26:52.179666 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:52.280713 kubelet[2961]: E0317 17:26:52.280659 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:52.289358 kubelet[2961]: I0317 17:26:52.289148 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:26:52.291131 kubelet[2961]: I0317 17:26:52.290753 2961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:26:52.291131 kubelet[2961]: I0317 17:26:52.290790 2961 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 17:26:52.291131 kubelet[2961]: I0317 17:26:52.290826 2961 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 17:26:52.291131 kubelet[2961]: I0317 17:26:52.290836 2961 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 17:26:52.291131 kubelet[2961]: E0317 17:26:52.290890 2961 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:26:52.294142 kubelet[2961]: W0317 17:26:52.293949 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:52.294142 kubelet[2961]: E0317 17:26:52.294021 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.381645 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.391811 2961 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.482303 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.582913 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.592109 2961 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:26:54.457522 kubelet[2961]: W0317 17:26:52.592593 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.592633 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.683300 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.779841 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.2-a-e33ca1f69b?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="3.2s"
Mar 17 17:26:54.457522 kubelet[2961]: E0317 17:26:52.784017 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:52.884522 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:52.984970 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:52.993123 2961 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:26:54.458140 kubelet[2961]: W0317 17:26:53.070902 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:53.070945 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:53.085340 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:53.185884 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:53.286760 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: E0317 17:26:53.387405 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458140 kubelet[2961]: W0317 17:26:53.420950 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.420991 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:54.458394 kubelet[2961]: W0317 17:26:53.481963 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.482002 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.488284 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.588762 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.689388 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.789940 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.794167 2961 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:26:54.458394 kubelet[2961]: E0317 17:26:53.890706 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458617 kubelet[2961]: E0317 17:26:53.991241 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458617 kubelet[2961]: E0317 17:26:54.091622 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458617 kubelet[2961]: E0317 17:26:54.192142 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458617 kubelet[2961]: E0317 17:26:54.293180 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.458617 kubelet[2961]: E0317 17:26:54.393495 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.494117 kubelet[2961]: E0317 17:26:54.494062 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.510777 kubelet[2961]: I0317 17:26:54.510243 2961 policy_none.go:49] "None policy: Start"
Mar 17 17:26:54.510777 kubelet[2961]: I0317 17:26:54.510287 2961 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 17:26:54.510777 kubelet[2961]: I0317 17:26:54.510305 2961 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:26:54.594550 kubelet[2961]: E0317 17:26:54.594477 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:54.695404 kubelet[2961]: E0317 17:26:54.695306 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.102351 kubelet[2961]: E0317 17:26:54.795993 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.102351 kubelet[2961]: E0317 17:26:54.896528 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.102351 kubelet[2961]: E0317 17:26:54.997071 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.102351 kubelet[2961]: E0317 17:26:55.097685 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.198428 kubelet[2961]: E0317 17:26:55.198379 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.263967 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:26:55.274748 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:26:55.279060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:26:55.292107 kubelet[2961]: I0317 17:26:55.291625 2961 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:26:55.292107 kubelet[2961]: I0317 17:26:55.291857 2961 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:26:55.292107 kubelet[2961]: I0317 17:26:55.291869 2961 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:26:55.292107 kubelet[2961]: I0317 17:26:55.292133 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:26:55.293944 kubelet[2961]: E0317 17:26:55.293863 2961 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 17:26:55.293944 kubelet[2961]: E0317 17:26:55.293919 2961 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:55.359736 kubelet[2961]: W0317 17:26:55.359594 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:55.359736 kubelet[2961]: E0317 17:26:55.359664 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:55.394099 kubelet[2961]: I0317 17:26:55.394054 2961 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.395185 kubelet[2961]: E0317 17:26:55.394742 2961 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.407106 systemd[1]: Created slice kubepods-burstable-pod208e3eb6a82c94a7b78b20bda2cb86eb.slice - libcontainer container kubepods-burstable-pod208e3eb6a82c94a7b78b20bda2cb86eb.slice.
Mar 17 17:26:55.428623 kubelet[2961]: E0317 17:26:55.428581 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.432734 systemd[1]: Created slice kubepods-burstable-pod09e1dd325249be7f02f8254ea6d1f788.slice - libcontainer container kubepods-burstable-pod09e1dd325249be7f02f8254ea6d1f788.slice.
Mar 17 17:26:55.440822 kubelet[2961]: E0317 17:26:55.440779 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.443923 systemd[1]: Created slice kubepods-burstable-pod729df02f0e47087b57988a4b56ce518e.slice - libcontainer container kubepods-burstable-pod729df02f0e47087b57988a4b56ce518e.slice.
Mar 17 17:26:55.446205 kubelet[2961]: E0317 17:26:55.446165 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.500895 kubelet[2961]: I0317 17:26:55.500649 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/208e3eb6a82c94a7b78b20bda2cb86eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.2-a-e33ca1f69b\" (UID: \"208e3eb6a82c94a7b78b20bda2cb86eb\") " pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.500895 kubelet[2961]: I0317 17:26:55.500691 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-ca-certs\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.500895 kubelet[2961]: I0317 17:26:55.500711 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.500895 kubelet[2961]: I0317 17:26:55.500727 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.500895 kubelet[2961]: I0317 17:26:55.500746 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.501363 kubelet[2961]: I0317 17:26:55.500764 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/208e3eb6a82c94a7b78b20bda2cb86eb-ca-certs\") pod \"kube-apiserver-ci-4152.2.2-a-e33ca1f69b\" (UID: \"208e3eb6a82c94a7b78b20bda2cb86eb\") " pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.501363 kubelet[2961]: I0317 17:26:55.500781 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/208e3eb6a82c94a7b78b20bda2cb86eb-k8s-certs\") pod \"kube-apiserver-ci-4152.2.2-a-e33ca1f69b\" (UID: \"208e3eb6a82c94a7b78b20bda2cb86eb\") " pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.501363 kubelet[2961]: I0317 17:26:55.500796 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.501363 kubelet[2961]: I0317 17:26:55.500811 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/729df02f0e47087b57988a4b56ce518e-kubeconfig\") pod \"kube-scheduler-ci-4152.2.2-a-e33ca1f69b\" (UID: \"729df02f0e47087b57988a4b56ce518e\") " pod="kube-system/kube-scheduler-ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.597241 kubelet[2961]: I0317 17:26:55.596839 2961 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.597485 kubelet[2961]: E0317 17:26:55.597446 2961 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:55.730354 containerd[1808]: time="2025-03-17T17:26:55.730067953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.2-a-e33ca1f69b,Uid:208e3eb6a82c94a7b78b20bda2cb86eb,Namespace:kube-system,Attempt:0,}"
Mar 17 17:26:55.741830 containerd[1808]: time="2025-03-17T17:26:55.741737346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.2-a-e33ca1f69b,Uid:09e1dd325249be7f02f8254ea6d1f788,Namespace:kube-system,Attempt:0,}"
Mar 17 17:26:55.748149 containerd[1808]: time="2025-03-17T17:26:55.748030062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.2-a-e33ca1f69b,Uid:729df02f0e47087b57988a4b56ce518e,Namespace:kube-system,Attempt:0,}"
Mar 17 17:26:55.956991 kubelet[2961]: E0317 17:26:55.956928 2961 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:55.981221 kubelet[2961]: E0317 17:26:55.981074 2961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.2-a-e33ca1f69b?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="6.4s"
Mar 17 17:26:55.999709 kubelet[2961]: I0317 17:26:55.999662 2961 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:56.000079 kubelet[2961]: E0317 17:26:56.000042 2961 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:56.502979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429315765.mount: Deactivated successfully.
Mar 17 17:26:56.536397 containerd[1808]: time="2025-03-17T17:26:56.536330113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:26:56.553888 containerd[1808]: time="2025-03-17T17:26:56.553821702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Mar 17 17:26:56.572603 containerd[1808]: time="2025-03-17T17:26:56.572077570Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:26:56.591355 containerd[1808]: time="2025-03-17T17:26:56.590610638Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:26:56.601043 containerd[1808]: time="2025-03-17T17:26:56.600942631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:26:56.605260 containerd[1808]: time="2025-03-17T17:26:56.605134189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:26:56.606685 containerd[1808]: time="2025-03-17T17:26:56.606628548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 876.465395ms"
Mar 17 17:26:56.612567 containerd[1808]: time="2025-03-17T17:26:56.611975584Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:26:56.628439 containerd[1808]: time="2025-03-17T17:26:56.627892494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:26:56.628884 containerd[1808]: time="2025-03-17T17:26:56.628841613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 886.991707ms"
Mar 17 17:26:56.661409 containerd[1808]: time="2025-03-17T17:26:56.661352272Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 913.22401ms"
Mar 17 17:26:56.802644 kubelet[2961]: I0317 17:26:56.802381 2961 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:56.803193 kubelet[2961]: E0317 17:26:56.803149 2961 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:56.939299 kubelet[2961]: W0317 17:26:56.939260 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:56.939447 kubelet[2961]: E0317 17:26:56.939314 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.2-a-e33ca1f69b&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:57.273453 containerd[1808]: time="2025-03-17T17:26:57.273161357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:26:57.273453 containerd[1808]: time="2025-03-17T17:26:57.273240317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:26:57.273453 containerd[1808]: time="2025-03-17T17:26:57.273255997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:26:57.273453 containerd[1808]: time="2025-03-17T17:26:57.273345517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:26:57.279857 containerd[1808]: time="2025-03-17T17:26:57.279549673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:26:57.279857 containerd[1808]: time="2025-03-17T17:26:57.279623873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:26:57.279857 containerd[1808]: time="2025-03-17T17:26:57.279697273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:26:57.280784 containerd[1808]: time="2025-03-17T17:26:57.280471953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:26:57.281884 containerd[1808]: time="2025-03-17T17:26:57.279373433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:26:57.282106 containerd[1808]: time="2025-03-17T17:26:57.281872432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:26:57.282106 containerd[1808]: time="2025-03-17T17:26:57.281894152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:26:57.282412 containerd[1808]: time="2025-03-17T17:26:57.282348231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:26:57.329788 systemd[1]: Started cri-containerd-2127550a3e1e3e07466e5e14d1e314a99baa295ee4ef6b152bccee0af6a52629.scope - libcontainer container 2127550a3e1e3e07466e5e14d1e314a99baa295ee4ef6b152bccee0af6a52629.
Mar 17 17:26:57.331137 systemd[1]: Started cri-containerd-9b0aad55a81907509e2536e17a29377d5cbdae0b84e39b80aa690d6a1e01e71f.scope - libcontainer container 9b0aad55a81907509e2536e17a29377d5cbdae0b84e39b80aa690d6a1e01e71f.
Mar 17 17:26:57.334310 systemd[1]: Started cri-containerd-fea1d924766eaddbec6924b1ac7007bdecfa80294a942432967e14f5862c9e3f.scope - libcontainer container fea1d924766eaddbec6924b1ac7007bdecfa80294a942432967e14f5862c9e3f.
Mar 17 17:26:57.399609 containerd[1808]: time="2025-03-17T17:26:57.398968116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.2-a-e33ca1f69b,Uid:208e3eb6a82c94a7b78b20bda2cb86eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b0aad55a81907509e2536e17a29377d5cbdae0b84e39b80aa690d6a1e01e71f\""
Mar 17 17:26:57.400182 containerd[1808]: time="2025-03-17T17:26:57.399216596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.2-a-e33ca1f69b,Uid:09e1dd325249be7f02f8254ea6d1f788,Namespace:kube-system,Attempt:0,} returns sandbox id \"fea1d924766eaddbec6924b1ac7007bdecfa80294a942432967e14f5862c9e3f\""
Mar 17 17:26:57.404109 containerd[1808]: time="2025-03-17T17:26:57.402772634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.2-a-e33ca1f69b,Uid:729df02f0e47087b57988a4b56ce518e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2127550a3e1e3e07466e5e14d1e314a99baa295ee4ef6b152bccee0af6a52629\""
Mar 17 17:26:57.405734 containerd[1808]: time="2025-03-17T17:26:57.405699312Z" level=info msg="CreateContainer within sandbox \"9b0aad55a81907509e2536e17a29377d5cbdae0b84e39b80aa690d6a1e01e71f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:26:57.407340 containerd[1808]: time="2025-03-17T17:26:57.407295071Z" level=info msg="CreateContainer within sandbox \"fea1d924766eaddbec6924b1ac7007bdecfa80294a942432967e14f5862c9e3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:26:57.410436 containerd[1808]: time="2025-03-17T17:26:57.410387029Z" level=info msg="CreateContainer within sandbox \"2127550a3e1e3e07466e5e14d1e314a99baa295ee4ef6b152bccee0af6a52629\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:26:57.482230 kubelet[2961]: W0317 17:26:57.482138 2961 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Mar 17 17:26:57.482230 kubelet[2961]: E0317 17:26:57.482190 2961 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:26:57.496877 containerd[1808]: time="2025-03-17T17:26:57.496748573Z" level=info msg="CreateContainer within sandbox \"9b0aad55a81907509e2536e17a29377d5cbdae0b84e39b80aa690d6a1e01e71f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ae8af7f587a6c731c211290bb95964701430809f41e771490a8cbe8679f4744\""
Mar 17 17:26:57.498663 containerd[1808]: time="2025-03-17T17:26:57.497424892Z" level=info msg="StartContainer for \"0ae8af7f587a6c731c211290bb95964701430809f41e771490a8cbe8679f4744\""
Mar 17 17:26:57.513959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649016817.mount: Deactivated successfully.
Mar 17 17:26:57.540814 systemd[1]: Started cri-containerd-0ae8af7f587a6c731c211290bb95964701430809f41e771490a8cbe8679f4744.scope - libcontainer container 0ae8af7f587a6c731c211290bb95964701430809f41e771490a8cbe8679f4744.
Mar 17 17:26:57.550649 containerd[1808]: time="2025-03-17T17:26:57.550477658Z" level=info msg="CreateContainer within sandbox \"2127550a3e1e3e07466e5e14d1e314a99baa295ee4ef6b152bccee0af6a52629\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c27073e0b97c53f9dc31555465822219bdb09385a8ee0ba5c064078a0e64850\""
Mar 17 17:26:57.551502 containerd[1808]: time="2025-03-17T17:26:57.551273338Z" level=info msg="StartContainer for \"7c27073e0b97c53f9dc31555465822219bdb09385a8ee0ba5c064078a0e64850\""
Mar 17 17:26:57.557082 containerd[1808]: time="2025-03-17T17:26:57.556962574Z" level=info msg="CreateContainer within sandbox \"fea1d924766eaddbec6924b1ac7007bdecfa80294a942432967e14f5862c9e3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"323c6835d6b0b7dc9d9ac3fdb9e0e91671462cc751bfb36359cbc126d4cf7ec7\""
Mar 17 17:26:57.558180 containerd[1808]: time="2025-03-17T17:26:57.558057093Z" level=info msg="StartContainer for \"323c6835d6b0b7dc9d9ac3fdb9e0e91671462cc751bfb36359cbc126d4cf7ec7\""
Mar 17 17:26:57.590784 systemd[1]: Started cri-containerd-7c27073e0b97c53f9dc31555465822219bdb09385a8ee0ba5c064078a0e64850.scope - libcontainer container 7c27073e0b97c53f9dc31555465822219bdb09385a8ee0ba5c064078a0e64850.
Mar 17 17:26:57.599552 containerd[1808]: time="2025-03-17T17:26:57.599437907Z" level=info msg="StartContainer for \"0ae8af7f587a6c731c211290bb95964701430809f41e771490a8cbe8679f4744\" returns successfully"
Mar 17 17:26:57.612781 systemd[1]: Started cri-containerd-323c6835d6b0b7dc9d9ac3fdb9e0e91671462cc751bfb36359cbc126d4cf7ec7.scope - libcontainer container 323c6835d6b0b7dc9d9ac3fdb9e0e91671462cc751bfb36359cbc126d4cf7ec7.
Mar 17 17:26:57.677875 containerd[1808]: time="2025-03-17T17:26:57.677280656Z" level=info msg="StartContainer for \"323c6835d6b0b7dc9d9ac3fdb9e0e91671462cc751bfb36359cbc126d4cf7ec7\" returns successfully"
Mar 17 17:26:57.677875 containerd[1808]: time="2025-03-17T17:26:57.677281296Z" level=info msg="StartContainer for \"7c27073e0b97c53f9dc31555465822219bdb09385a8ee0ba5c064078a0e64850\" returns successfully"
Mar 17 17:26:58.310057 kubelet[2961]: E0317 17:26:58.309983 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:58.314036 kubelet[2961]: E0317 17:26:58.313764 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:58.320569 kubelet[2961]: E0317 17:26:58.320014 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:58.405149 kubelet[2961]: I0317 17:26:58.405109 2961 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:59.319053 kubelet[2961]: E0317 17:26:59.318813 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:59.320724 kubelet[2961]: E0317 17:26:59.320469 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:59.809631 kubelet[2961]: I0317 17:26:59.809563 2961 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:26:59.809631 kubelet[2961]: E0317 17:26:59.809628 2961 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4152.2.2-a-e33ca1f69b\": node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:59.835852 kubelet[2961]: E0317 17:26:59.835803 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:26:59.936404 kubelet[2961]: E0317 17:26:59.936346 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.037081 kubelet[2961]: E0317 17:27:00.037023 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.137773 kubelet[2961]: E0317 17:27:00.137643 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.238324 kubelet[2961]: E0317 17:27:00.238270 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.338789 kubelet[2961]: E0317 17:27:00.338747 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.439554 kubelet[2961]: E0317 17:27:00.439419 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.540125 kubelet[2961]: E0317 17:27:00.540079 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.640976 kubelet[2961]: E0317 17:27:00.640928 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.742007 kubelet[2961]: E0317 17:27:00.741870 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.842099 kubelet[2961]: E0317 17:27:00.842029 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:00.942791 kubelet[2961]: E0317 17:27:00.942734 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.043967 kubelet[2961]: E0317 17:27:01.043829 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.144351 kubelet[2961]: E0317 17:27:01.144297 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.245423 kubelet[2961]: E0317 17:27:01.245287 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.346433 kubelet[2961]: E0317 17:27:01.346306 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.447576 kubelet[2961]: E0317 17:27:01.447127 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.548032 kubelet[2961]: E0317 17:27:01.547972 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.648567 kubelet[2961]: E0317 17:27:01.648398 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.749558 kubelet[2961]: E0317 17:27:01.749488 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.851712 kubelet[2961]: E0317 17:27:01.851665 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:01.951516 kubelet[2961]: E0317 17:27:01.951161 2961 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found" node="ci-4152.2.2-a-e33ca1f69b"
Mar 17 17:27:01.951893 kubelet[2961]: E0317 17:27:01.951872 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:02.052295 kubelet[2961]: E0317 17:27:02.052223 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:02.073796 systemd[1]: Reloading requested from client PID 3234 ('systemctl') (unit session-9.scope)...
Mar 17 17:27:02.074180 systemd[1]: Reloading...
Mar 17 17:27:02.153382 kubelet[2961]: E0317 17:27:02.153134 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:02.218579 zram_generator::config[3277]: No configuration found.
Mar 17 17:27:02.254313 kubelet[2961]: E0317 17:27:02.254252 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:02.333410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:27:02.354983 kubelet[2961]: E0317 17:27:02.354928 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:02.428196 systemd[1]: Reloading finished in 353 ms.
Mar 17 17:27:02.459545 kubelet[2961]: E0317 17:27:02.455598 2961 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152.2.2-a-e33ca1f69b\" not found"
Mar 17 17:27:02.464238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:27:02.483166 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:27:02.483565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:27:02.483713 systemd[1]: kubelet.service: Consumed 1.801s CPU time, 125.0M memory peak, 0B memory swap peak.
Mar 17 17:27:02.489844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:27:02.665872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:27:02.674988 (kubelet)[3337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:27:02.954088 kubelet[3337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:27:02.954088 kubelet[3337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:27:02.954088 kubelet[3337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:27:02.954088 kubelet[3337]: I0317 17:27:02.736751 3337 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:27:02.954088 kubelet[3337]: I0317 17:27:02.743321 3337 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 17:27:02.954088 kubelet[3337]: I0317 17:27:02.743352 3337 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:27:02.954088 kubelet[3337]: I0317 17:27:02.743899 3337 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 17:27:02.955334 kubelet[3337]: I0317 17:27:02.955040 3337 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:27:02.959147 kubelet[3337]: I0317 17:27:02.959107 3337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:27:02.964664 kubelet[3337]: E0317 17:27:02.964084 3337 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:27:02.964664 kubelet[3337]: I0317 17:27:02.964120 3337 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:27:02.967619 kubelet[3337]: I0317 17:27:02.967579 3337 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:27:02.967966 kubelet[3337]: I0317 17:27:02.967923 3337 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:27:02.968208 kubelet[3337]: I0317 17:27:02.967965 3337 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152.2.2-a-e33ca1f69b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:27:02.968297 kubelet[3337]: I0317 17:27:02.968217 3337 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:27:02.968297 kubelet[3337]: I0317 17:27:02.968229 3337 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 17:27:02.968297 kubelet[3337]: I0317 17:27:02.968276 3337 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:27:02.968424 kubelet[3337]: I0317 17:27:02.968406 3337 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 17:27:02.968424 kubelet[3337]: I0317 17:27:02.968420 3337 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:27:02.968593 kubelet[3337]: I0317 17:27:02.968440 3337 kubelet.go:352] "Adding apiserver pod source"
Mar 17 17:27:02.968593 kubelet[3337]: I0317 17:27:02.968450 3337 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:27:02.970646 kubelet[3337]: I0317 17:27:02.970615 3337 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:27:02.972576 kubelet[3337]: I0317 17:27:02.971261 3337 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:27:02.972576 kubelet[3337]: I0317 17:27:02.971752 3337 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 17:27:02.972576 kubelet[3337]: I0317 17:27:02.971785 3337 server.go:1287] "Started kubelet"
Mar 17 17:27:02.977447 kubelet[3337]: I0317 17:27:02.977402 3337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:27:02.982108 kubelet[3337]: I0317 17:27:02.981999 3337 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:27:02.984338 kubelet[3337]: I0317 17:27:02.983876 3337 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 17:27:02.986386 kubelet[3337]: I0317 17:27:02.986309 3337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:27:02.986684 kubelet[3337]: I0317 17:27:02.986658 3337 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:27:02.991560 kubelet[3337]: I0317 17:27:02.991434 3337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:27:02.996232 kubelet[3337]: I0317 17:27:02.995248 3337 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 17:27:03.005552 kubelet[3337]: I0317 17:27:03.004541 3337 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:27:03.005552 kubelet[3337]: I0317 17:27:03.004774 3337 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:27:03.009145 kubelet[3337]: I0317 17:27:03.008277 3337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:27:03.009666 kubelet[3337]: I0317 17:27:03.009472 3337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:27:03.009666 kubelet[3337]: I0317 17:27:03.009519 3337 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 17:27:03.009666 kubelet[3337]: I0317 17:27:03.009667 3337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 17:27:03.009760 kubelet[3337]: I0317 17:27:03.009677 3337 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 17:27:03.009760 kubelet[3337]: E0317 17:27:03.009740 3337 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:27:03.027852 kubelet[3337]: E0317 17:27:03.027805 3337 kubelet.go:1561] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:27:03.028140 kubelet[3337]: I0317 17:27:03.028103 3337 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:27:03.028140 kubelet[3337]: I0317 17:27:03.028125 3337 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:27:03.028249 kubelet[3337]: I0317 17:27:03.028219 3337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:27:03.087421 kubelet[3337]: I0317 17:27:03.087095 3337 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:27:03.087677 kubelet[3337]: I0317 17:27:03.087656 3337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:27:03.088453 kubelet[3337]: I0317 17:27:03.088430 3337 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:27:03.089139 kubelet[3337]: I0317 17:27:03.088800 3337 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:27:03.089294 kubelet[3337]: I0317 17:27:03.089250 3337 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:27:03.089350 kubelet[3337]: I0317 17:27:03.089342 3337 policy_none.go:49] "None policy: Start" Mar 17 17:27:03.089406 kubelet[3337]: I0317 17:27:03.089397 3337 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:27:03.089462 kubelet[3337]: I0317 17:27:03.089454 3337 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:27:03.089695 kubelet[3337]: I0317 17:27:03.089679 3337 state_mem.go:75] "Updated machine memory state" Mar 17 17:27:03.094639 kubelet[3337]: I0317 17:27:03.094609 3337 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:27:03.095717 kubelet[3337]: I0317 17:27:03.094960 3337 eviction_manager.go:189] "Eviction 
manager: starting control loop" Mar 17 17:27:03.095717 kubelet[3337]: I0317 17:27:03.094984 3337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:27:03.095717 kubelet[3337]: I0317 17:27:03.095234 3337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:27:03.100069 sudo[3369]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:27:03.100410 sudo[3369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:27:03.108620 kubelet[3337]: E0317 17:27:03.107175 3337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:27:03.111799 kubelet[3337]: I0317 17:27:03.111254 3337 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.114429 kubelet[3337]: I0317 17:27:03.114098 3337 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.118913 kubelet[3337]: I0317 17:27:03.118636 3337 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.140605 kubelet[3337]: W0317 17:27:03.140526 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:27:03.145634 kubelet[3337]: W0317 17:27:03.145593 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 17:27:03.146355 kubelet[3337]: W0317 17:27:03.146321 3337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Mar 17 17:27:03.216275 kubelet[3337]: I0317 17:27:03.215899 3337 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.237055 kubelet[3337]: I0317 17:27:03.237008 3337 kubelet_node_status.go:125] "Node was previously registered" node="ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.237201 kubelet[3337]: I0317 17:27:03.237114 3337 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306486 kubelet[3337]: I0317 17:27:03.305942 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306486 kubelet[3337]: I0317 17:27:03.305994 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306486 kubelet[3337]: I0317 17:27:03.306114 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/729df02f0e47087b57988a4b56ce518e-kubeconfig\") pod \"kube-scheduler-ci-4152.2.2-a-e33ca1f69b\" (UID: \"729df02f0e47087b57988a4b56ce518e\") " pod="kube-system/kube-scheduler-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306486 kubelet[3337]: I0317 17:27:03.306146 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/208e3eb6a82c94a7b78b20bda2cb86eb-k8s-certs\") pod \"kube-apiserver-ci-4152.2.2-a-e33ca1f69b\" (UID: \"208e3eb6a82c94a7b78b20bda2cb86eb\") " pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306486 kubelet[3337]: I0317 17:27:03.306169 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-ca-certs\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306946 kubelet[3337]: I0317 17:27:03.306187 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306946 kubelet[3337]: I0317 17:27:03.306202 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/208e3eb6a82c94a7b78b20bda2cb86eb-ca-certs\") pod \"kube-apiserver-ci-4152.2.2-a-e33ca1f69b\" (UID: \"208e3eb6a82c94a7b78b20bda2cb86eb\") " pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306946 kubelet[3337]: I0317 17:27:03.306218 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/208e3eb6a82c94a7b78b20bda2cb86eb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.2-a-e33ca1f69b\" (UID: \"208e3eb6a82c94a7b78b20bda2cb86eb\") " pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.306946 kubelet[3337]: I0317 
17:27:03.306239 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09e1dd325249be7f02f8254ea6d1f788-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.2-a-e33ca1f69b\" (UID: \"09e1dd325249be7f02f8254ea6d1f788\") " pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" Mar 17 17:27:03.598362 sudo[3369]: pam_unix(sudo:session): session closed for user root Mar 17 17:27:03.969520 kubelet[3337]: I0317 17:27:03.969156 3337 apiserver.go:52] "Watching apiserver" Mar 17 17:27:04.005503 kubelet[3337]: I0317 17:27:04.005463 3337 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:27:04.123671 kubelet[3337]: I0317 17:27:04.123362 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.2-a-e33ca1f69b" podStartSLOduration=1.123304251 podStartE2EDuration="1.123304251s" podCreationTimestamp="2025-03-17 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:04.107607661 +0000 UTC m=+1.427748436" watchObservedRunningTime="2025-03-17 17:27:04.123304251 +0000 UTC m=+1.443445026" Mar 17 17:27:04.145336 kubelet[3337]: I0317 17:27:04.145100 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.2-a-e33ca1f69b" podStartSLOduration=1.145079677 podStartE2EDuration="1.145079677s" podCreationTimestamp="2025-03-17 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:04.125332529 +0000 UTC m=+1.445473304" watchObservedRunningTime="2025-03-17 17:27:04.145079677 +0000 UTC m=+1.465220452" Mar 17 17:27:04.165414 kubelet[3337]: I0317 17:27:04.165301 3337 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.2-a-e33ca1f69b" podStartSLOduration=1.165280063 podStartE2EDuration="1.165280063s" podCreationTimestamp="2025-03-17 17:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:04.146611196 +0000 UTC m=+1.466751971" watchObservedRunningTime="2025-03-17 17:27:04.165280063 +0000 UTC m=+1.485420838" Mar 17 17:27:05.113297 sudo[2414]: pam_unix(sudo:session): session closed for user root Mar 17 17:27:05.181052 sshd[2413]: Connection closed by 10.200.16.10 port 41206 Mar 17 17:27:05.181523 sshd-session[2411]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:05.185925 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:41206.service: Deactivated successfully. Mar 17 17:27:05.188227 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:27:05.188557 systemd[1]: session-9.scope: Consumed 6.604s CPU time, 157.6M memory peak, 0B memory swap peak. Mar 17 17:27:05.189326 systemd-logind[1694]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:27:05.190458 systemd-logind[1694]: Removed session 9. Mar 17 17:27:07.304265 kubelet[3337]: I0317 17:27:07.304210 3337 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:27:07.305351 containerd[1808]: time="2025-03-17T17:27:07.304631014Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:27:07.306037 kubelet[3337]: I0317 17:27:07.305620 3337 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:27:07.860740 systemd[1]: Created slice kubepods-besteffort-pod55a336a3_5746_4867_8db3_68b4c6412429.slice - libcontainer container kubepods-besteffort-pod55a336a3_5746_4867_8db3_68b4c6412429.slice. 
Mar 17 17:27:07.866409 kubelet[3337]: W0317 17:27:07.866324 3337 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152.2.2-a-e33ca1f69b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.2-a-e33ca1f69b' and this object Mar 17 17:27:07.866409 kubelet[3337]: E0317 17:27:07.866375 3337 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4152.2.2-a-e33ca1f69b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.2-a-e33ca1f69b' and this object" logger="UnhandledError" Mar 17 17:27:07.866777 kubelet[3337]: W0317 17:27:07.866586 3337 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152.2.2-a-e33ca1f69b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.2-a-e33ca1f69b' and this object Mar 17 17:27:07.866777 kubelet[3337]: E0317 17:27:07.866607 3337 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4152.2.2-a-e33ca1f69b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.2-a-e33ca1f69b' and this object" logger="UnhandledError" Mar 17 17:27:07.868572 kubelet[3337]: I0317 17:27:07.866323 3337 status_manager.go:890] "Failed to get status for pod" podUID="55a336a3-5746-4867-8db3-68b4c6412429" pod="kube-system/kube-proxy-b7mks" err="pods 
\"kube-proxy-b7mks\" is forbidden: User \"system:node:ci-4152.2.2-a-e33ca1f69b\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4152.2.2-a-e33ca1f69b' and this object" Mar 17 17:27:07.880716 systemd[1]: Created slice kubepods-burstable-podeb5574a5_4148_41b0_b2b5_243de437e748.slice - libcontainer container kubepods-burstable-podeb5574a5_4148_41b0_b2b5_243de437e748.slice. Mar 17 17:27:07.934373 kubelet[3337]: I0317 17:27:07.934327 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-xtables-lock\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934373 kubelet[3337]: I0317 17:27:07.934371 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-hubble-tls\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934523 kubelet[3337]: I0317 17:27:07.934392 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ttf\" (UniqueName: \"kubernetes.io/projected/55a336a3-5746-4867-8db3-68b4c6412429-kube-api-access-s2ttf\") pod \"kube-proxy-b7mks\" (UID: \"55a336a3-5746-4867-8db3-68b4c6412429\") " pod="kube-system/kube-proxy-b7mks" Mar 17 17:27:07.934523 kubelet[3337]: I0317 17:27:07.934409 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-run\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934523 kubelet[3337]: I0317 
17:27:07.934424 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cni-path\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934523 kubelet[3337]: I0317 17:27:07.934439 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-etc-cni-netd\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934523 kubelet[3337]: I0317 17:27:07.934455 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-config-path\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934723 kubelet[3337]: I0317 17:27:07.934471 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r4bv\" (UniqueName: \"kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-kube-api-access-6r4bv\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934723 kubelet[3337]: I0317 17:27:07.934487 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb5574a5-4148-41b0-b2b5-243de437e748-clustermesh-secrets\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934723 kubelet[3337]: I0317 17:27:07.934501 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-kernel\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934723 kubelet[3337]: I0317 17:27:07.934518 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-bpf-maps\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934723 kubelet[3337]: I0317 17:27:07.934600 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-hostproc\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934723 kubelet[3337]: I0317 17:27:07.934620 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-cgroup\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934888 kubelet[3337]: I0317 17:27:07.934640 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55a336a3-5746-4867-8db3-68b4c6412429-kube-proxy\") pod \"kube-proxy-b7mks\" (UID: \"55a336a3-5746-4867-8db3-68b4c6412429\") " pod="kube-system/kube-proxy-b7mks" Mar 17 17:27:07.934888 kubelet[3337]: I0317 17:27:07.934656 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55a336a3-5746-4867-8db3-68b4c6412429-xtables-lock\") pod 
\"kube-proxy-b7mks\" (UID: \"55a336a3-5746-4867-8db3-68b4c6412429\") " pod="kube-system/kube-proxy-b7mks" Mar 17 17:27:07.934888 kubelet[3337]: I0317 17:27:07.934671 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a336a3-5746-4867-8db3-68b4c6412429-lib-modules\") pod \"kube-proxy-b7mks\" (UID: \"55a336a3-5746-4867-8db3-68b4c6412429\") " pod="kube-system/kube-proxy-b7mks" Mar 17 17:27:07.934888 kubelet[3337]: I0317 17:27:07.934694 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-lib-modules\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:07.934888 kubelet[3337]: I0317 17:27:07.934715 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-net\") pod \"cilium-6zbzb\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") " pod="kube-system/cilium-6zbzb" Mar 17 17:27:08.298693 systemd[1]: Created slice kubepods-besteffort-pod257fac58_5384_444c_b517_deacc2d28da3.slice - libcontainer container kubepods-besteffort-pod257fac58_5384_444c_b517_deacc2d28da3.slice. 
Mar 17 17:27:08.336856 kubelet[3337]: I0317 17:27:08.336768 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/257fac58-5384-444c-b517-deacc2d28da3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vswnp\" (UID: \"257fac58-5384-444c-b517-deacc2d28da3\") " pod="kube-system/cilium-operator-6c4d7847fc-vswnp" Mar 17 17:27:08.337348 kubelet[3337]: I0317 17:27:08.337275 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmjcl\" (UniqueName: \"kubernetes.io/projected/257fac58-5384-444c-b517-deacc2d28da3-kube-api-access-fmjcl\") pod \"cilium-operator-6c4d7847fc-vswnp\" (UID: \"257fac58-5384-444c-b517-deacc2d28da3\") " pod="kube-system/cilium-operator-6c4d7847fc-vswnp" Mar 17 17:27:08.792153 containerd[1808]: time="2025-03-17T17:27:08.792100963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zbzb,Uid:eb5574a5-4148-41b0-b2b5-243de437e748,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:08.834450 containerd[1808]: time="2025-03-17T17:27:08.834084055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:08.834450 containerd[1808]: time="2025-03-17T17:27:08.834154775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:08.834450 containerd[1808]: time="2025-03-17T17:27:08.834170495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:08.834450 containerd[1808]: time="2025-03-17T17:27:08.834254095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:08.851765 systemd[1]: Started cri-containerd-7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e.scope - libcontainer container 7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e. Mar 17 17:27:08.877073 containerd[1808]: time="2025-03-17T17:27:08.877009667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zbzb,Uid:eb5574a5-4148-41b0-b2b5-243de437e748,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\"" Mar 17 17:27:08.879714 containerd[1808]: time="2025-03-17T17:27:08.879636025Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:27:08.903971 containerd[1808]: time="2025-03-17T17:27:08.903575930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vswnp,Uid:257fac58-5384-444c-b517-deacc2d28da3,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:08.971083 containerd[1808]: time="2025-03-17T17:27:08.970956486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:08.971083 containerd[1808]: time="2025-03-17T17:27:08.971036166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:08.971083 containerd[1808]: time="2025-03-17T17:27:08.971054886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:08.971350 containerd[1808]: time="2025-03-17T17:27:08.971147406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:08.997780 systemd[1]: Started cri-containerd-96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a.scope - libcontainer container 96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a. Mar 17 17:27:09.034496 containerd[1808]: time="2025-03-17T17:27:09.034444284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vswnp,Uid:257fac58-5384-444c-b517-deacc2d28da3,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\"" Mar 17 17:27:09.035687 kubelet[3337]: E0317 17:27:09.035648 3337 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:27:09.035807 kubelet[3337]: E0317 17:27:09.035755 3337 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55a336a3-5746-4867-8db3-68b4c6412429-kube-proxy podName:55a336a3-5746-4867-8db3-68b4c6412429 nodeName:}" failed. No retries permitted until 2025-03-17 17:27:09.535727124 +0000 UTC m=+6.855867899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/55a336a3-5746-4867-8db3-68b4c6412429-kube-proxy") pod "kube-proxy-b7mks" (UID: "55a336a3-5746-4867-8db3-68b4c6412429") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:27:09.669152 containerd[1808]: time="2025-03-17T17:27:09.669105710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7mks,Uid:55a336a3-5746-4867-8db3-68b4c6412429,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:09.731523 containerd[1808]: time="2025-03-17T17:27:09.730962150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:27:09.731523 containerd[1808]: time="2025-03-17T17:27:09.731032030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:27:09.732026 containerd[1808]: time="2025-03-17T17:27:09.731052070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:09.732026 containerd[1808]: time="2025-03-17T17:27:09.731923149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:09.749890 systemd[1]: run-containerd-runc-k8s.io-a305f55594073f067a263d9f70b3d64be2fe77167dea9d7d19c884fbde519f83-runc.laZeha.mount: Deactivated successfully.
Mar 17 17:27:09.760816 systemd[1]: Started cri-containerd-a305f55594073f067a263d9f70b3d64be2fe77167dea9d7d19c884fbde519f83.scope - libcontainer container a305f55594073f067a263d9f70b3d64be2fe77167dea9d7d19c884fbde519f83.
Mar 17 17:27:09.783342 containerd[1808]: time="2025-03-17T17:27:09.783290835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7mks,Uid:55a336a3-5746-4867-8db3-68b4c6412429,Namespace:kube-system,Attempt:0,} returns sandbox id \"a305f55594073f067a263d9f70b3d64be2fe77167dea9d7d19c884fbde519f83\""
Mar 17 17:27:09.788015 containerd[1808]: time="2025-03-17T17:27:09.787962472Z" level=info msg="CreateContainer within sandbox \"a305f55594073f067a263d9f70b3d64be2fe77167dea9d7d19c884fbde519f83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:27:09.842031 containerd[1808]: time="2025-03-17T17:27:09.841980037Z" level=info msg="CreateContainer within sandbox \"a305f55594073f067a263d9f70b3d64be2fe77167dea9d7d19c884fbde519f83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a7155c2492cf6fb69a0cf1c434fbf94c5d5d6ae353a05440a2308501ff3d9559\""
Mar 17 17:27:09.843845 containerd[1808]: time="2025-03-17T17:27:09.843786156Z" level=info msg="StartContainer for \"a7155c2492cf6fb69a0cf1c434fbf94c5d5d6ae353a05440a2308501ff3d9559\""
Mar 17 17:27:09.874825 systemd[1]: Started cri-containerd-a7155c2492cf6fb69a0cf1c434fbf94c5d5d6ae353a05440a2308501ff3d9559.scope - libcontainer container a7155c2492cf6fb69a0cf1c434fbf94c5d5d6ae353a05440a2308501ff3d9559.
Mar 17 17:27:09.909449 containerd[1808]: time="2025-03-17T17:27:09.909395753Z" level=info msg="StartContainer for \"a7155c2492cf6fb69a0cf1c434fbf94c5d5d6ae353a05440a2308501ff3d9559\" returns successfully"
Mar 17 17:27:11.981346 kubelet[3337]: I0317 17:27:11.980986 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b7mks" podStartSLOduration=4.980963212 podStartE2EDuration="4.980963212s" podCreationTimestamp="2025-03-17 17:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:10.100295549 +0000 UTC m=+7.420436364" watchObservedRunningTime="2025-03-17 17:27:11.980963212 +0000 UTC m=+9.301103987"
Mar 17 17:27:19.847981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437380264.mount: Deactivated successfully.
Mar 17 17:27:33.827735 containerd[1808]: time="2025-03-17T17:27:33.827680822Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:27:33.832975 containerd[1808]: time="2025-03-17T17:27:33.832927220Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 17 17:27:33.840520 containerd[1808]: time="2025-03-17T17:27:33.840453896Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:27:33.842413 containerd[1808]: time="2025-03-17T17:27:33.842254015Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 24.96257171s"
Mar 17 17:27:33.842413 containerd[1808]: time="2025-03-17T17:27:33.842310175Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 17 17:27:33.844800 containerd[1808]: time="2025-03-17T17:27:33.844487494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 17:27:33.846177 containerd[1808]: time="2025-03-17T17:27:33.845898773Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:27:33.896779 containerd[1808]: time="2025-03-17T17:27:33.894657630Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\""
Mar 17 17:27:33.896779 containerd[1808]: time="2025-03-17T17:27:33.895683590Z" level=info msg="StartContainer for \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\""
Mar 17 17:27:33.933759 systemd[1]: Started cri-containerd-2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947.scope - libcontainer container 2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947.
Mar 17 17:27:33.963542 containerd[1808]: time="2025-03-17T17:27:33.963474758Z" level=info msg="StartContainer for \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\" returns successfully"
Mar 17 17:27:33.971652 systemd[1]: cri-containerd-2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947.scope: Deactivated successfully.
Mar 17 17:27:34.879590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947-rootfs.mount: Deactivated successfully.
Mar 17 17:27:35.593253 containerd[1808]: time="2025-03-17T17:27:35.593096267Z" level=info msg="shim disconnected" id=2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947 namespace=k8s.io
Mar 17 17:27:35.593253 containerd[1808]: time="2025-03-17T17:27:35.593184026Z" level=warning msg="cleaning up after shim disconnected" id=2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947 namespace=k8s.io
Mar 17 17:27:35.593253 containerd[1808]: time="2025-03-17T17:27:35.593193306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:36.136118 containerd[1808]: time="2025-03-17T17:27:36.135590326Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:27:36.208109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461611212.mount: Deactivated successfully.
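Mount-unit names like `var-lib-containerd-tmpmounts-containerd\x2dmount2461611212.mount` above use systemd's path escaping: "/" separators become "-", and bytes that would be ambiguous (such as a literal "-") become `\xNN`. A small sketch of decoding such a name back to its path (`systemd-escape --unescape` is the canonical tool; this handles only the `\xNN` form seen in this log):

```python
import re

def unescape_unit_path(unit):
    """Decode a systemd mount-unit name back to the filesystem path it guards.
    '/' is encoded as '-', and other bytes (e.g. a literal '-') as \\xNN."""
    name = unit.removesuffix(".mount")
    name = name.replace("-", "/")  # separators first: the \xNN escapes contain no '-'
    name = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

print(unescape_unit_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2461611212.mount"))
# /var/lib/containerd/tmpmounts/containerd-mount2461611212
```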
Mar 17 17:27:36.251070 containerd[1808]: time="2025-03-17T17:27:36.251016978Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\""
Mar 17 17:27:36.252201 containerd[1808]: time="2025-03-17T17:27:36.251951018Z" level=info msg="StartContainer for \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\""
Mar 17 17:27:36.287745 systemd[1]: Started cri-containerd-79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce.scope - libcontainer container 79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce.
Mar 17 17:27:36.320992 containerd[1808]: time="2025-03-17T17:27:36.320781042Z" level=info msg="StartContainer for \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\" returns successfully"
Mar 17 17:27:36.324408 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:27:36.324716 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:27:36.324782 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:27:36.330086 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:27:36.331511 systemd[1]: cri-containerd-79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce.scope: Deactivated successfully.
Mar 17 17:27:36.354920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:27:36.371685 containerd[1808]: time="2025-03-17T17:27:36.371567710Z" level=info msg="shim disconnected" id=79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce namespace=k8s.io
Mar 17 17:27:36.371685 containerd[1808]: time="2025-03-17T17:27:36.371631190Z" level=warning msg="cleaning up after shim disconnected" id=79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce namespace=k8s.io
Mar 17 17:27:36.371685 containerd[1808]: time="2025-03-17T17:27:36.371641430Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:37.138573 containerd[1808]: time="2025-03-17T17:27:37.138426528Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:27:37.205308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce-rootfs.mount: Deactivated successfully.
Mar 17 17:27:37.214949 containerd[1808]: time="2025-03-17T17:27:37.214852110Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\""
Mar 17 17:27:37.215930 containerd[1808]: time="2025-03-17T17:27:37.215833869Z" level=info msg="StartContainer for \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\""
Mar 17 17:27:37.249751 systemd[1]: Started cri-containerd-ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381.scope - libcontainer container ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381.
Mar 17 17:27:37.282753 systemd[1]: cri-containerd-ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381.scope: Deactivated successfully.
Mar 17 17:27:37.286830 containerd[1808]: time="2025-03-17T17:27:37.286726252Z" level=info msg="StartContainer for \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\" returns successfully"
Mar 17 17:27:37.311944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381-rootfs.mount: Deactivated successfully.
Mar 17 17:27:37.331488 containerd[1808]: time="2025-03-17T17:27:37.331405762Z" level=info msg="shim disconnected" id=ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381 namespace=k8s.io
Mar 17 17:27:37.331488 containerd[1808]: time="2025-03-17T17:27:37.331463362Z" level=warning msg="cleaning up after shim disconnected" id=ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381 namespace=k8s.io
Mar 17 17:27:37.331488 containerd[1808]: time="2025-03-17T17:27:37.331472162Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:38.142046 containerd[1808]: time="2025-03-17T17:27:38.141928209Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:27:38.200367 containerd[1808]: time="2025-03-17T17:27:38.200312676Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\""
Mar 17 17:27:38.201369 containerd[1808]: time="2025-03-17T17:27:38.201076795Z" level=info msg="StartContainer for \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\""
Mar 17 17:27:38.230790 systemd[1]: Started cri-containerd-b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1.scope - libcontainer container b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1.
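The CreateContainer entries above run the Cilium pod's init containers one after another inside the same sandbox (the one whose id starts 7ab02314…): mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state. A small sketch of recovering that order from the messages (the LINES literals below are heavily abbreviated sample entries, not the full log lines):

```python
import re

# Abbreviated CreateContainer entries, in log order, all from the same sandbox.
LINES = [
    'msg="CreateContainer within sandbox \\"7ab02314\\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"7ab02314\\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"7ab02314\\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"7ab02314\\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"',
]

NAME_RE = re.compile(r'for container &ContainerMetadata\{Name:([A-Za-z0-9-]+),')

order = [m.group(1) for line in LINES for m in [NAME_RE.search(line)] if m]
print(order)  # ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state']
```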
Mar 17 17:27:38.258135 systemd[1]: cri-containerd-b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1.scope: Deactivated successfully.
Mar 17 17:27:38.264031 containerd[1808]: time="2025-03-17T17:27:38.263970820Z" level=info msg="StartContainer for \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\" returns successfully"
Mar 17 17:27:38.282744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1-rootfs.mount: Deactivated successfully.
Mar 17 17:27:38.296286 containerd[1808]: time="2025-03-17T17:27:38.296212853Z" level=info msg="shim disconnected" id=b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1 namespace=k8s.io
Mar 17 17:27:38.296286 containerd[1808]: time="2025-03-17T17:27:38.296274293Z" level=warning msg="cleaning up after shim disconnected" id=b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1 namespace=k8s.io
Mar 17 17:27:38.296286 containerd[1808]: time="2025-03-17T17:27:38.296285533Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:39.045732 containerd[1808]: time="2025-03-17T17:27:39.045657195Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:27:39.053125 containerd[1808]: time="2025-03-17T17:27:39.052870953Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Mar 17 17:27:39.062376 containerd[1808]: time="2025-03-17T17:27:39.062319591Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:27:39.065248 containerd[1808]: time="2025-03-17T17:27:39.065105430Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.220577776s"
Mar 17 17:27:39.065248 containerd[1808]: time="2025-03-17T17:27:39.065159030Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Mar 17 17:27:39.068062 containerd[1808]: time="2025-03-17T17:27:39.067772430Z" level=info msg="CreateContainer within sandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 17:27:39.127587 containerd[1808]: time="2025-03-17T17:27:39.127508055Z" level=info msg="CreateContainer within sandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\""
Mar 17 17:27:39.128431 containerd[1808]: time="2025-03-17T17:27:39.128404295Z" level=info msg="StartContainer for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\""
Mar 17 17:27:39.151258 containerd[1808]: time="2025-03-17T17:27:39.150826490Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:27:39.166749 systemd[1]: Started cri-containerd-2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295.scope - libcontainer container 2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295.
Mar 17 17:27:39.204876 containerd[1808]: time="2025-03-17T17:27:39.204793757Z" level=info msg="StartContainer for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" returns successfully"
Mar 17 17:27:39.210306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998104611.mount: Deactivated successfully.
Mar 17 17:27:39.216972 containerd[1808]: time="2025-03-17T17:27:39.216862834Z" level=info msg="CreateContainer within sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\""
Mar 17 17:27:39.218637 containerd[1808]: time="2025-03-17T17:27:39.217695994Z" level=info msg="StartContainer for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\""
Mar 17 17:27:39.254819 systemd[1]: Started cri-containerd-05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918.scope - libcontainer container 05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918.
Mar 17 17:27:39.298695 containerd[1808]: time="2025-03-17T17:27:39.298557935Z" level=info msg="StartContainer for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" returns successfully"
Mar 17 17:27:39.386158 kubelet[3337]: I0317 17:27:39.386118 3337 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 17 17:27:39.448808 systemd[1]: Created slice kubepods-burstable-pod5b5fe741_345b_4446_9ad8_14e6493b7708.slice - libcontainer container kubepods-burstable-pod5b5fe741_345b_4446_9ad8_14e6493b7708.slice.
Mar 17 17:27:39.462603 systemd[1]: Created slice kubepods-burstable-pod320bcd02_bd86_447d_96cb_dc5cdd7fe24b.slice - libcontainer container kubepods-burstable-pod320bcd02_bd86_447d_96cb_dc5cdd7fe24b.slice.
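The "Created slice kubepods-burstable-pod…" entries above come from the kubelet's systemd cgroup driver, which embeds the pod UID in the slice name with its dashes rewritten as underscores (a "-" would otherwise be read as a slice hierarchy separator). A sketch of recovering the UID, matched against the slice names in the log:

```python
import re

def pod_uid(slice_name):
    """Recover the pod UID from a kubelet systemd slice name.
    In the slice name the UID's '-' characters are stored as '_'."""
    m = re.search(r"pod([0-9a-f_]+)\.slice$", slice_name)
    return m.group(1).replace("_", "-") if m else None

print(pod_uid("kubepods-burstable-pod5b5fe741_345b_4446_9ad8_14e6493b7708.slice"))
# 5b5fe741-345b-4446-9ad8-14e6493b7708
```

The result matches the coredns-668d6bf9bc-q96nj pod UID shown in the volume-attach entries that follow.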
Mar 17 17:27:39.551236 kubelet[3337]: I0317 17:27:39.550853 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/320bcd02-bd86-447d-96cb-dc5cdd7fe24b-config-volume\") pod \"coredns-668d6bf9bc-jcq2z\" (UID: \"320bcd02-bd86-447d-96cb-dc5cdd7fe24b\") " pod="kube-system/coredns-668d6bf9bc-jcq2z"
Mar 17 17:27:39.551236 kubelet[3337]: I0317 17:27:39.551177 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hdv\" (UniqueName: \"kubernetes.io/projected/320bcd02-bd86-447d-96cb-dc5cdd7fe24b-kube-api-access-d7hdv\") pod \"coredns-668d6bf9bc-jcq2z\" (UID: \"320bcd02-bd86-447d-96cb-dc5cdd7fe24b\") " pod="kube-system/coredns-668d6bf9bc-jcq2z"
Mar 17 17:27:39.551506 kubelet[3337]: I0317 17:27:39.551413 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2mqt\" (UniqueName: \"kubernetes.io/projected/5b5fe741-345b-4446-9ad8-14e6493b7708-kube-api-access-z2mqt\") pod \"coredns-668d6bf9bc-q96nj\" (UID: \"5b5fe741-345b-4446-9ad8-14e6493b7708\") " pod="kube-system/coredns-668d6bf9bc-q96nj"
Mar 17 17:27:39.551506 kubelet[3337]: I0317 17:27:39.551465 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b5fe741-345b-4446-9ad8-14e6493b7708-config-volume\") pod \"coredns-668d6bf9bc-q96nj\" (UID: \"5b5fe741-345b-4446-9ad8-14e6493b7708\") " pod="kube-system/coredns-668d6bf9bc-q96nj"
Mar 17 17:27:39.755405 containerd[1808]: time="2025-03-17T17:27:39.755296946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q96nj,Uid:5b5fe741-345b-4446-9ad8-14e6493b7708,Namespace:kube-system,Attempt:0,}"
Mar 17 17:27:39.773029 containerd[1808]: time="2025-03-17T17:27:39.772973382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jcq2z,Uid:320bcd02-bd86-447d-96cb-dc5cdd7fe24b,Namespace:kube-system,Attempt:0,}"
Mar 17 17:27:40.348868 kubelet[3337]: I0317 17:27:40.348796 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6zbzb" podStartSLOduration=8.384300497 podStartE2EDuration="33.348774606s" podCreationTimestamp="2025-03-17 17:27:07 +0000 UTC" firstStartedPulling="2025-03-17 17:27:08.879090746 +0000 UTC m=+6.199231521" lastFinishedPulling="2025-03-17 17:27:33.843564855 +0000 UTC m=+31.163705630" observedRunningTime="2025-03-17 17:27:40.347796286 +0000 UTC m=+37.667937061" watchObservedRunningTime="2025-03-17 17:27:40.348774606 +0000 UTC m=+37.668915381"
Mar 17 17:27:40.349073 kubelet[3337]: I0317 17:27:40.348974 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vswnp" podStartSLOduration=2.322292897 podStartE2EDuration="32.348968126s" podCreationTimestamp="2025-03-17 17:27:08 +0000 UTC" firstStartedPulling="2025-03-17 17:27:09.039332881 +0000 UTC m=+6.359473656" lastFinishedPulling="2025-03-17 17:27:39.06600811 +0000 UTC m=+36.386148885" observedRunningTime="2025-03-17 17:27:40.224377915 +0000 UTC m=+37.544518730" watchObservedRunningTime="2025-03-17 17:27:40.348968126 +0000 UTC m=+37.669108901"
Mar 17 17:27:41.545802 systemd-networkd[1501]: cilium_host: Link UP
Mar 17 17:27:41.548991 systemd-networkd[1501]: cilium_net: Link UP
Mar 17 17:27:41.549210 systemd-networkd[1501]: cilium_net: Gained carrier
Mar 17 17:27:41.549329 systemd-networkd[1501]: cilium_host: Gained carrier
Mar 17 17:27:41.696780 systemd-networkd[1501]: cilium_net: Gained IPv6LL
Mar 17 17:27:41.735899 systemd-networkd[1501]: cilium_vxlan: Link UP
Mar 17 17:27:41.735906 systemd-networkd[1501]: cilium_vxlan: Gained carrier
Mar 17 17:27:41.905727 systemd-networkd[1501]: cilium_host: Gained IPv6LL
Mar 17 17:27:42.012668 kernel: NET: Registered PF_ALG protocol family
Mar 17 17:27:42.759950 systemd-networkd[1501]: lxc_health: Link UP
Mar 17 17:27:42.779667 systemd-networkd[1501]: lxc_health: Gained carrier
Mar 17 17:27:42.916960 systemd-networkd[1501]: lxc38af570cf2bd: Link UP
Mar 17 17:27:42.928751 kernel: eth0: renamed from tmpfdd74
Mar 17 17:27:42.944221 systemd-networkd[1501]: lxca765994b377f: Link UP
Mar 17 17:27:42.946359 systemd-networkd[1501]: lxc38af570cf2bd: Gained carrier
Mar 17 17:27:42.954565 kernel: eth0: renamed from tmp45e29
Mar 17 17:27:42.960940 systemd-networkd[1501]: lxca765994b377f: Gained carrier
Mar 17 17:27:43.672795 systemd-networkd[1501]: cilium_vxlan: Gained IPv6LL
Mar 17 17:27:43.993700 systemd-networkd[1501]: lxc_health: Gained IPv6LL
Mar 17 17:27:44.440690 systemd-networkd[1501]: lxc38af570cf2bd: Gained IPv6LL
Mar 17 17:27:44.632684 systemd-networkd[1501]: lxca765994b377f: Gained IPv6LL
Mar 17 17:27:46.971890 containerd[1808]: time="2025-03-17T17:27:46.971756561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:27:46.971890 containerd[1808]: time="2025-03-17T17:27:46.971823121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:27:46.972351 containerd[1808]: time="2025-03-17T17:27:46.971838601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:46.972712 containerd[1808]: time="2025-03-17T17:27:46.972561841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:47.009072 systemd[1]: run-containerd-runc-k8s.io-fdd7459d77a49b83e31c5dea3b051136a7470e5022d8bb3f79f4fcaedd30d167-runc.9TRXdA.mount: Deactivated successfully.
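The pod_startup_latency_tracker entries above relate their fields as: podStartSLOduration = podStartE2EDuration − (lastFinishedPulling − firstStartedPulling), i.e. startup latency with the image-pull time excluded. The cilium-6zbzb numbers in this log are consistent with that, which can be checked using the monotonic m=+ offsets logged alongside each timestamp (Decimal keeps the arithmetic exact):

```python
from decimal import Decimal

# Monotonic m=+... offsets from the cilium-6zbzb entry above.
first_started_pulling = Decimal("6.199231521")
last_finished_pulling = Decimal("31.163705630")
e2e = Decimal("33.348774606")   # podStartE2EDuration

pull = last_finished_pulling - first_started_pulling
slo = e2e - pull                # startup latency excluding image pull

print(pull)  # 24.964474109
print(slo)   # 8.384300497  == podStartSLOduration in the log
```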
Mar 17 17:27:47.020752 systemd[1]: Started cri-containerd-fdd7459d77a49b83e31c5dea3b051136a7470e5022d8bb3f79f4fcaedd30d167.scope - libcontainer container fdd7459d77a49b83e31c5dea3b051136a7470e5022d8bb3f79f4fcaedd30d167.
Mar 17 17:27:47.036264 containerd[1808]: time="2025-03-17T17:27:47.035965660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:27:47.036264 containerd[1808]: time="2025-03-17T17:27:47.036076340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:27:47.036649 containerd[1808]: time="2025-03-17T17:27:47.036158700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:47.037150 containerd[1808]: time="2025-03-17T17:27:47.036567420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:27:47.067754 systemd[1]: Started cri-containerd-45e29695cf4659c84132d871108eca45d0b272cdc082297a4d0f9138e6570dfa.scope - libcontainer container 45e29695cf4659c84132d871108eca45d0b272cdc082297a4d0f9138e6570dfa.
Mar 17 17:27:47.105908 containerd[1808]: time="2025-03-17T17:27:47.105168277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jcq2z,Uid:320bcd02-bd86-447d-96cb-dc5cdd7fe24b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdd7459d77a49b83e31c5dea3b051136a7470e5022d8bb3f79f4fcaedd30d167\""
Mar 17 17:27:47.110512 containerd[1808]: time="2025-03-17T17:27:47.110445515Z" level=info msg="CreateContainer within sandbox \"fdd7459d77a49b83e31c5dea3b051136a7470e5022d8bb3f79f4fcaedd30d167\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:27:47.136942 containerd[1808]: time="2025-03-17T17:27:47.136889867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q96nj,Uid:5b5fe741-345b-4446-9ad8-14e6493b7708,Namespace:kube-system,Attempt:0,} returns sandbox id \"45e29695cf4659c84132d871108eca45d0b272cdc082297a4d0f9138e6570dfa\""
Mar 17 17:27:47.141978 containerd[1808]: time="2025-03-17T17:27:47.141927705Z" level=info msg="CreateContainer within sandbox \"45e29695cf4659c84132d871108eca45d0b272cdc082297a4d0f9138e6570dfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:27:47.190671 containerd[1808]: time="2025-03-17T17:27:47.190625609Z" level=info msg="CreateContainer within sandbox \"fdd7459d77a49b83e31c5dea3b051136a7470e5022d8bb3f79f4fcaedd30d167\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b90ce9a4cf68c09c45fb4423de5e86962c9afe44d4512b2c7192334b4a20dd0c\""
Mar 17 17:27:47.192123 containerd[1808]: time="2025-03-17T17:27:47.191842409Z" level=info msg="StartContainer for \"b90ce9a4cf68c09c45fb4423de5e86962c9afe44d4512b2c7192334b4a20dd0c\""
Mar 17 17:27:47.217774 systemd[1]: Started cri-containerd-b90ce9a4cf68c09c45fb4423de5e86962c9afe44d4512b2c7192334b4a20dd0c.scope - libcontainer container b90ce9a4cf68c09c45fb4423de5e86962c9afe44d4512b2c7192334b4a20dd0c.
Mar 17 17:27:47.228288 containerd[1808]: time="2025-03-17T17:27:47.228026077Z" level=info msg="CreateContainer within sandbox \"45e29695cf4659c84132d871108eca45d0b272cdc082297a4d0f9138e6570dfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bfb3c83cd0bc02736f61ad047c709d15f4f345b7c099c79d902839dad9032f6\""
Mar 17 17:27:47.229512 containerd[1808]: time="2025-03-17T17:27:47.229235836Z" level=info msg="StartContainer for \"5bfb3c83cd0bc02736f61ad047c709d15f4f345b7c099c79d902839dad9032f6\""
Mar 17 17:27:47.260022 containerd[1808]: time="2025-03-17T17:27:47.259948386Z" level=info msg="StartContainer for \"b90ce9a4cf68c09c45fb4423de5e86962c9afe44d4512b2c7192334b4a20dd0c\" returns successfully"
Mar 17 17:27:47.269954 systemd[1]: Started cri-containerd-5bfb3c83cd0bc02736f61ad047c709d15f4f345b7c099c79d902839dad9032f6.scope - libcontainer container 5bfb3c83cd0bc02736f61ad047c709d15f4f345b7c099c79d902839dad9032f6.
Mar 17 17:27:47.314557 containerd[1808]: time="2025-03-17T17:27:47.314225728Z" level=info msg="StartContainer for \"5bfb3c83cd0bc02736f61ad047c709d15f4f345b7c099c79d902839dad9032f6\" returns successfully"
Mar 17 17:27:48.197016 kubelet[3337]: I0317 17:27:48.196947 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jcq2z" podStartSLOduration=40.196929038 podStartE2EDuration="40.196929038s" podCreationTimestamp="2025-03-17 17:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:48.194336039 +0000 UTC m=+45.514476814" watchObservedRunningTime="2025-03-17 17:27:48.196929038 +0000 UTC m=+45.517069813"
Mar 17 17:27:48.216259 kubelet[3337]: I0317 17:27:48.216185 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q96nj" podStartSLOduration=40.216163792 podStartE2EDuration="40.216163792s" podCreationTimestamp="2025-03-17 17:27:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:48.214284273 +0000 UTC m=+45.534425048" watchObservedRunningTime="2025-03-17 17:27:48.216163792 +0000 UTC m=+45.536304567"
Mar 17 17:27:57.652591 waagent[1927]: 2025-03-17T17:27:57.651816Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Mar 17 17:27:57.659773 waagent[1927]: 2025-03-17T17:27:57.659713Z INFO ExtHandler
Mar 17 17:27:57.659887 waagent[1927]: 2025-03-17T17:27:57.659864Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f840f152-9279-4279-af76-0b12f72b0364 eTag: 3476014389710211667 source: Fabric]
Mar 17 17:27:57.660319 waagent[1927]: 2025-03-17T17:27:57.660267Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Mar 17 17:27:57.660987 waagent[1927]: 2025-03-17T17:27:57.660931Z INFO ExtHandler
Mar 17 17:27:57.661128 waagent[1927]: 2025-03-17T17:27:57.661087Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Mar 17 17:27:57.740157 waagent[1927]: 2025-03-17T17:27:57.740086Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Mar 17 17:27:57.898199 waagent[1927]: 2025-03-17T17:27:57.898093Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CF76A22CCED062FF8C39A5D3BDACD242FDD34149', 'hasPrivateKey': True}
Mar 17 17:27:57.898705 waagent[1927]: 2025-03-17T17:27:57.898654Z INFO ExtHandler Downloaded certificate {'thumbprint': '41518CF6F2B5C7CDB6D029B351552E19A8139D74', 'hasPrivateKey': False}
Mar 17 17:27:57.899136 waagent[1927]: 2025-03-17T17:27:57.899090Z INFO ExtHandler Fetch goal state completed
Mar 17 17:27:57.899656 waagent[1927]: 2025-03-17T17:27:57.899606Z INFO ExtHandler ExtHandler
Mar 17 17:27:57.899746 waagent[1927]: 2025-03-17T17:27:57.899707Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: bbbcfbde-d37b-4564-80b5-863d989c1369 correlation 70df159f-fe58-4c06-b1d2-f0431f5df022 created: 2025-03-17T17:27:19.839447Z]
Mar 17 17:27:57.900100 waagent[1927]: 2025-03-17T17:27:57.900057Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Mar 17 17:27:57.900673 waagent[1927]: 2025-03-17T17:27:57.900633Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms]
Mar 17 17:28:29.825622 update_engine[1698]: I20250317 17:28:29.825135 1698 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 17 17:28:29.825622 update_engine[1698]: I20250317 17:28:29.825188 1698 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 17 17:28:29.825622 update_engine[1698]: I20250317 17:28:29.825358 1698 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826601 1698 omaha_request_params.cc:62] Current group set to stable
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826732 1698 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826742 1698 update_attempter.cc:643] Scheduling an action processor start.
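The waagent "Downloaded certificate" entries above log the certificate metadata as a Python dict literal, so the payload can be parsed back safely with `ast.literal_eval` (the payload string below is copied from the first such entry):

```python
import ast

# The payload portion of a 'Downloaded certificate ...' waagent entry above.
payload = "{'thumbprint': 'CF76A22CCED062FF8C39A5D3BDACD242FDD34149', 'hasPrivateKey': True}"

# literal_eval only evaluates literal structures, unlike eval(), so it is
# safe to apply to log-derived text.
cert = ast.literal_eval(payload)
print(cert["thumbprint"], cert["hasPrivateKey"])
# CF76A22CCED062FF8C39A5D3BDACD242FDD34149 True
```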
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826761 1698 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826790 1698 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826858 1698 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826866 1698 omaha_request_action.cc:272] Request:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]:
Mar 17 17:28:29.827092 update_engine[1698]: I20250317 17:28:29.826874 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 17 17:28:29.827509 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 17 17:28:29.828405 update_engine[1698]: I20250317 17:28:29.828276 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 17 17:28:29.828815 update_engine[1698]: I20250317 17:28:29.828774 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 17 17:28:29.943491 update_engine[1698]: E20250317 17:28:29.943419 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:28:29.943661 update_engine[1698]: I20250317 17:28:29.943569 1698 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 17 17:28:39.794641 update_engine[1698]: I20250317 17:28:39.794564 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:28:39.794998 update_engine[1698]: I20250317 17:28:39.794813 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:28:39.795077 update_engine[1698]: I20250317 17:28:39.795046 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:28:39.814656 update_engine[1698]: E20250317 17:28:39.814600 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:28:39.814770 update_engine[1698]: I20250317 17:28:39.814692 1698 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 17 17:28:49.795795 update_engine[1698]: I20250317 17:28:49.795714 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:28:49.796274 update_engine[1698]: I20250317 17:28:49.795958 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:28:49.796274 update_engine[1698]: I20250317 17:28:49.796211 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 17:28:49.880717 update_engine[1698]: E20250317 17:28:49.880651 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:28:49.880860 update_engine[1698]: I20250317 17:28:49.880754 1698 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 17 17:28:59.794847 update_engine[1698]: I20250317 17:28:59.794767 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:28:59.795218 update_engine[1698]: I20250317 17:28:59.795027 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:28:59.795317 update_engine[1698]: I20250317 17:28:59.795284 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 17 17:28:59.879176 update_engine[1698]: E20250317 17:28:59.879089 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879225 1698 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879242 1698 omaha_request_action.cc:617] Omaha request response: Mar 17 17:28:59.879402 update_engine[1698]: E20250317 17:28:59.879343 1698 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879360 1698 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879365 1698 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879370 1698 update_attempter.cc:306] Processing Done. Mar 17 17:28:59.879402 update_engine[1698]: E20250317 17:28:59.879387 1698 update_attempter.cc:619] Update failed. 
Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879393 1698 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879397 1698 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 17 17:28:59.879402 update_engine[1698]: I20250317 17:28:59.879403 1698 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 17 17:28:59.879653 update_engine[1698]: I20250317 17:28:59.879474 1698 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 17 17:28:59.879653 update_engine[1698]: I20250317 17:28:59.879495 1698 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 17 17:28:59.879653 update_engine[1698]: I20250317 17:28:59.879500 1698 omaha_request_action.cc:272] Request: Mar 17 17:28:59.879653 update_engine[1698]: Mar 17 17:28:59.879653 update_engine[1698]: Mar 17 17:28:59.879653 update_engine[1698]: Mar 17 17:28:59.879653 update_engine[1698]: Mar 17 17:28:59.879653 update_engine[1698]: Mar 17 17:28:59.879653 update_engine[1698]: Mar 17 17:28:59.879653 update_engine[1698]: I20250317 17:28:59.879508 1698 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 17 17:28:59.879832 update_engine[1698]: I20250317 17:28:59.879686 1698 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 17 17:28:59.880131 update_engine[1698]: I20250317 17:28:59.879940 1698 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 17 17:28:59.880189 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 17 17:28:59.901716 update_engine[1698]: E20250317 17:28:59.901652 1698 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901754 1698 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901763 1698 omaha_request_action.cc:617] Omaha request response: Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901772 1698 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901777 1698 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901787 1698 update_attempter.cc:306] Processing Done. Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901795 1698 update_attempter.cc:310] Error event sent. Mar 17 17:28:59.902055 update_engine[1698]: I20250317 17:28:59.901805 1698 update_check_scheduler.cc:74] Next update check in 43m58s Mar 17 17:28:59.902470 locksmithd[1770]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 17 17:29:26.537797 systemd[1]: Started sshd@7-10.200.20.35:22-10.200.16.10:57290.service - OpenSSH per-connection server daemon (10.200.16.10:57290). Mar 17 17:29:27.011507 sshd[4742]: Accepted publickey for core from 10.200.16.10 port 57290 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:27.014138 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:27.024435 systemd-logind[1694]: New session 10 of user core. 
Mar 17 17:29:27.029809 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:29:27.446638 sshd[4744]: Connection closed by 10.200.16.10 port 57290 Mar 17 17:29:27.446085 sshd-session[4742]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:27.449438 systemd[1]: sshd@7-10.200.20.35:22-10.200.16.10:57290.service: Deactivated successfully. Mar 17 17:29:27.452338 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:29:27.455749 systemd-logind[1694]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:29:27.457052 systemd-logind[1694]: Removed session 10. Mar 17 17:29:32.529937 systemd[1]: Started sshd@8-10.200.20.35:22-10.200.16.10:44870.service - OpenSSH per-connection server daemon (10.200.16.10:44870). Mar 17 17:29:32.960469 sshd[4756]: Accepted publickey for core from 10.200.16.10 port 44870 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:32.961852 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:32.966197 systemd-logind[1694]: New session 11 of user core. Mar 17 17:29:32.976898 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:29:33.345257 sshd[4758]: Connection closed by 10.200.16.10 port 44870 Mar 17 17:29:33.345851 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:33.349458 systemd[1]: sshd@8-10.200.20.35:22-10.200.16.10:44870.service: Deactivated successfully. Mar 17 17:29:33.352209 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:29:33.353523 systemd-logind[1694]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:29:33.354497 systemd-logind[1694]: Removed session 11. Mar 17 17:29:38.433148 systemd[1]: Started sshd@9-10.200.20.35:22-10.200.16.10:39822.service - OpenSSH per-connection server daemon (10.200.16.10:39822). 
Mar 17 17:29:38.862643 sshd[4770]: Accepted publickey for core from 10.200.16.10 port 39822 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:38.864054 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:38.868383 systemd-logind[1694]: New session 12 of user core. Mar 17 17:29:38.879721 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:29:39.236416 sshd[4772]: Connection closed by 10.200.16.10 port 39822 Mar 17 17:29:39.237188 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:39.242170 systemd[1]: sshd@9-10.200.20.35:22-10.200.16.10:39822.service: Deactivated successfully. Mar 17 17:29:39.245732 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:29:39.247630 systemd-logind[1694]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:29:39.248955 systemd-logind[1694]: Removed session 12. Mar 17 17:29:44.326870 systemd[1]: Started sshd@10-10.200.20.35:22-10.200.16.10:39828.service - OpenSSH per-connection server daemon (10.200.16.10:39828). Mar 17 17:29:44.800291 sshd[4786]: Accepted publickey for core from 10.200.16.10 port 39828 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:44.802156 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:44.808624 systemd-logind[1694]: New session 13 of user core. Mar 17 17:29:44.815764 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:29:45.226986 sshd[4788]: Connection closed by 10.200.16.10 port 39828 Mar 17 17:29:45.227594 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:45.231718 systemd-logind[1694]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:29:45.232463 systemd[1]: sshd@10-10.200.20.35:22-10.200.16.10:39828.service: Deactivated successfully. 
Mar 17 17:29:45.235159 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:29:45.236840 systemd-logind[1694]: Removed session 13. Mar 17 17:29:50.323823 systemd[1]: Started sshd@11-10.200.20.35:22-10.200.16.10:42114.service - OpenSSH per-connection server daemon (10.200.16.10:42114). Mar 17 17:29:50.814379 sshd[4800]: Accepted publickey for core from 10.200.16.10 port 42114 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:50.816334 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:50.820514 systemd-logind[1694]: New session 14 of user core. Mar 17 17:29:50.827773 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:29:51.243885 sshd[4802]: Connection closed by 10.200.16.10 port 42114 Mar 17 17:29:51.244457 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:51.248571 systemd-logind[1694]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:29:51.249261 systemd[1]: sshd@11-10.200.20.35:22-10.200.16.10:42114.service: Deactivated successfully. Mar 17 17:29:51.251912 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:29:51.253171 systemd-logind[1694]: Removed session 14. Mar 17 17:29:51.341178 systemd[1]: Started sshd@12-10.200.20.35:22-10.200.16.10:42126.service - OpenSSH per-connection server daemon (10.200.16.10:42126). Mar 17 17:29:51.789248 sshd[4814]: Accepted publickey for core from 10.200.16.10 port 42126 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:51.791051 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:51.796725 systemd-logind[1694]: New session 15 of user core. Mar 17 17:29:51.804717 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 17 17:29:52.224664 sshd[4816]: Connection closed by 10.200.16.10 port 42126 Mar 17 17:29:52.225654 sshd-session[4814]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:52.228840 systemd[1]: sshd@12-10.200.20.35:22-10.200.16.10:42126.service: Deactivated successfully. Mar 17 17:29:52.230891 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:29:52.232720 systemd-logind[1694]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:29:52.235271 systemd-logind[1694]: Removed session 15. Mar 17 17:29:52.326964 systemd[1]: Started sshd@13-10.200.20.35:22-10.200.16.10:42138.service - OpenSSH per-connection server daemon (10.200.16.10:42138). Mar 17 17:29:52.820500 sshd[4825]: Accepted publickey for core from 10.200.16.10 port 42138 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:52.821957 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:52.827060 systemd-logind[1694]: New session 16 of user core. Mar 17 17:29:52.835807 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:29:53.256670 sshd[4827]: Connection closed by 10.200.16.10 port 42138 Mar 17 17:29:53.257221 sshd-session[4825]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:53.260987 systemd[1]: sshd@13-10.200.20.35:22-10.200.16.10:42138.service: Deactivated successfully. Mar 17 17:29:53.263112 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:29:53.263938 systemd-logind[1694]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:29:53.265476 systemd-logind[1694]: Removed session 16. Mar 17 17:29:58.348843 systemd[1]: Started sshd@14-10.200.20.35:22-10.200.16.10:42148.service - OpenSSH per-connection server daemon (10.200.16.10:42148). 
Mar 17 17:29:58.797250 sshd[4838]: Accepted publickey for core from 10.200.16.10 port 42148 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:29:58.798658 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:29:58.803185 systemd-logind[1694]: New session 17 of user core. Mar 17 17:29:58.810743 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:29:59.193022 sshd[4840]: Connection closed by 10.200.16.10 port 42148 Mar 17 17:29:59.193939 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Mar 17 17:29:59.198327 systemd-logind[1694]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:29:59.198874 systemd[1]: sshd@14-10.200.20.35:22-10.200.16.10:42148.service: Deactivated successfully. Mar 17 17:29:59.201520 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:29:59.203197 systemd-logind[1694]: Removed session 17. Mar 17 17:30:04.276125 systemd[1]: Started sshd@15-10.200.20.35:22-10.200.16.10:56954.service - OpenSSH per-connection server daemon (10.200.16.10:56954). Mar 17 17:30:04.730318 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 56954 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:04.732217 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:04.737028 systemd-logind[1694]: New session 18 of user core. Mar 17 17:30:04.747746 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:30:05.121776 sshd[4855]: Connection closed by 10.200.16.10 port 56954 Mar 17 17:30:05.122423 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:05.126477 systemd[1]: sshd@15-10.200.20.35:22-10.200.16.10:56954.service: Deactivated successfully. Mar 17 17:30:05.128608 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:30:05.129498 systemd-logind[1694]: Session 18 logged out. 
Waiting for processes to exit. Mar 17 17:30:05.131042 systemd-logind[1694]: Removed session 18. Mar 17 17:30:05.213873 systemd[1]: Started sshd@16-10.200.20.35:22-10.200.16.10:56960.service - OpenSSH per-connection server daemon (10.200.16.10:56960). Mar 17 17:30:05.709782 sshd[4866]: Accepted publickey for core from 10.200.16.10 port 56960 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:05.711343 sshd-session[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:05.716158 systemd-logind[1694]: New session 19 of user core. Mar 17 17:30:05.721739 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:30:06.177313 sshd[4868]: Connection closed by 10.200.16.10 port 56960 Mar 17 17:30:06.177965 sshd-session[4866]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:06.181981 systemd[1]: sshd@16-10.200.20.35:22-10.200.16.10:56960.service: Deactivated successfully. Mar 17 17:30:06.184195 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:30:06.185278 systemd-logind[1694]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:30:06.189308 systemd-logind[1694]: Removed session 19. Mar 17 17:30:06.265857 systemd[1]: Started sshd@17-10.200.20.35:22-10.200.16.10:56976.service - OpenSSH per-connection server daemon (10.200.16.10:56976). Mar 17 17:30:06.713373 sshd[4877]: Accepted publickey for core from 10.200.16.10 port 56976 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:06.714940 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:06.719824 systemd-logind[1694]: New session 20 of user core. Mar 17 17:30:06.724705 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 17 17:30:08.044176 sshd[4879]: Connection closed by 10.200.16.10 port 56976 Mar 17 17:30:08.045090 sshd-session[4877]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:08.050607 systemd[1]: sshd@17-10.200.20.35:22-10.200.16.10:56976.service: Deactivated successfully. Mar 17 17:30:08.054277 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:30:08.059519 systemd-logind[1694]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:30:08.065160 systemd-logind[1694]: Removed session 20. Mar 17 17:30:08.127354 systemd[1]: Started sshd@18-10.200.20.35:22-10.200.16.10:56986.service - OpenSSH per-connection server daemon (10.200.16.10:56986). Mar 17 17:30:08.582267 sshd[4895]: Accepted publickey for core from 10.200.16.10 port 56986 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:08.583759 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:08.588846 systemd-logind[1694]: New session 21 of user core. Mar 17 17:30:08.590754 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:30:09.098437 sshd[4897]: Connection closed by 10.200.16.10 port 56986 Mar 17 17:30:09.099222 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:09.103594 systemd-logind[1694]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:30:09.104309 systemd[1]: sshd@18-10.200.20.35:22-10.200.16.10:56986.service: Deactivated successfully. Mar 17 17:30:09.106938 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:30:09.108403 systemd-logind[1694]: Removed session 21. Mar 17 17:30:09.188968 systemd[1]: Started sshd@19-10.200.20.35:22-10.200.16.10:60940.service - OpenSSH per-connection server daemon (10.200.16.10:60940). 
Mar 17 17:30:09.641989 sshd[4906]: Accepted publickey for core from 10.200.16.10 port 60940 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:09.643511 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:09.647978 systemd-logind[1694]: New session 22 of user core. Mar 17 17:30:09.656755 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:30:10.029017 sshd[4908]: Connection closed by 10.200.16.10 port 60940 Mar 17 17:30:10.029711 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:10.033084 systemd-logind[1694]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:30:10.033288 systemd[1]: sshd@19-10.200.20.35:22-10.200.16.10:60940.service: Deactivated successfully. Mar 17 17:30:10.035963 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:30:10.038446 systemd-logind[1694]: Removed session 22. Mar 17 17:30:15.132860 systemd[1]: Started sshd@20-10.200.20.35:22-10.200.16.10:60942.service - OpenSSH per-connection server daemon (10.200.16.10:60942). Mar 17 17:30:15.620729 sshd[4921]: Accepted publickey for core from 10.200.16.10 port 60942 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:15.622310 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:15.629914 systemd-logind[1694]: New session 23 of user core. Mar 17 17:30:15.633788 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:30:16.051660 sshd[4926]: Connection closed by 10.200.16.10 port 60942 Mar 17 17:30:16.052354 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:16.057813 systemd[1]: sshd@20-10.200.20.35:22-10.200.16.10:60942.service: Deactivated successfully. Mar 17 17:30:16.063466 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:30:16.064738 systemd-logind[1694]: Session 23 logged out. 
Waiting for processes to exit. Mar 17 17:30:16.066152 systemd-logind[1694]: Removed session 23. Mar 17 17:30:21.143846 systemd[1]: Started sshd@21-10.200.20.35:22-10.200.16.10:40222.service - OpenSSH per-connection server daemon (10.200.16.10:40222). Mar 17 17:30:21.629641 sshd[4936]: Accepted publickey for core from 10.200.16.10 port 40222 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:21.631167 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:21.636527 systemd-logind[1694]: New session 24 of user core. Mar 17 17:30:21.639822 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:30:22.053946 sshd[4938]: Connection closed by 10.200.16.10 port 40222 Mar 17 17:30:22.054498 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:22.061659 systemd[1]: sshd@21-10.200.20.35:22-10.200.16.10:40222.service: Deactivated successfully. Mar 17 17:30:22.067193 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:30:22.068184 systemd-logind[1694]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:30:22.069461 systemd-logind[1694]: Removed session 24. Mar 17 17:30:27.155039 systemd[1]: Started sshd@22-10.200.20.35:22-10.200.16.10:40228.service - OpenSSH per-connection server daemon (10.200.16.10:40228). Mar 17 17:30:27.640708 sshd[4948]: Accepted publickey for core from 10.200.16.10 port 40228 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:27.642148 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:27.646314 systemd-logind[1694]: New session 25 of user core. Mar 17 17:30:27.658729 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 17 17:30:28.063789 sshd[4950]: Connection closed by 10.200.16.10 port 40228 Mar 17 17:30:28.064099 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:28.067990 systemd[1]: sshd@22-10.200.20.35:22-10.200.16.10:40228.service: Deactivated successfully. Mar 17 17:30:28.069939 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:30:28.070889 systemd-logind[1694]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:30:28.073138 systemd-logind[1694]: Removed session 25. Mar 17 17:30:33.162902 systemd[1]: Started sshd@23-10.200.20.35:22-10.200.16.10:45266.service - OpenSSH per-connection server daemon (10.200.16.10:45266). Mar 17 17:30:33.652141 sshd[4964]: Accepted publickey for core from 10.200.16.10 port 45266 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:33.653708 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:33.659471 systemd-logind[1694]: New session 26 of user core. Mar 17 17:30:33.669726 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:30:34.083468 sshd[4966]: Connection closed by 10.200.16.10 port 45266 Mar 17 17:30:34.084178 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:34.089391 systemd[1]: sshd@23-10.200.20.35:22-10.200.16.10:45266.service: Deactivated successfully. Mar 17 17:30:34.095080 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:30:34.096049 systemd-logind[1694]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:30:34.098972 systemd-logind[1694]: Removed session 26. Mar 17 17:30:39.168868 systemd[1]: Started sshd@24-10.200.20.35:22-10.200.16.10:51568.service - OpenSSH per-connection server daemon (10.200.16.10:51568). 
Mar 17 17:30:39.620187 sshd[4977]: Accepted publickey for core from 10.200.16.10 port 51568 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:39.621774 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:39.627619 systemd-logind[1694]: New session 27 of user core. Mar 17 17:30:39.637827 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:30:40.008917 sshd[4979]: Connection closed by 10.200.16.10 port 51568 Mar 17 17:30:40.008746 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:40.012695 systemd[1]: sshd@24-10.200.20.35:22-10.200.16.10:51568.service: Deactivated successfully. Mar 17 17:30:40.014750 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:30:40.016694 systemd-logind[1694]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:30:40.019169 systemd-logind[1694]: Removed session 27. Mar 17 17:30:45.094452 systemd[1]: Started sshd@25-10.200.20.35:22-10.200.16.10:51578.service - OpenSSH per-connection server daemon (10.200.16.10:51578). Mar 17 17:30:45.558852 sshd[4994]: Accepted publickey for core from 10.200.16.10 port 51578 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:45.560368 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:45.564810 systemd-logind[1694]: New session 28 of user core. Mar 17 17:30:45.572745 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 17 17:30:45.954087 sshd[4996]: Connection closed by 10.200.16.10 port 51578 Mar 17 17:30:45.954578 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:45.959111 systemd[1]: sshd@25-10.200.20.35:22-10.200.16.10:51578.service: Deactivated successfully. Mar 17 17:30:45.962323 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:30:45.965290 systemd-logind[1694]: Session 28 logged out. 
Waiting for processes to exit. Mar 17 17:30:45.967027 systemd-logind[1694]: Removed session 28. Mar 17 17:30:51.043885 systemd[1]: Started sshd@26-10.200.20.35:22-10.200.16.10:34994.service - OpenSSH per-connection server daemon (10.200.16.10:34994). Mar 17 17:30:51.493974 sshd[5007]: Accepted publickey for core from 10.200.16.10 port 34994 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:51.495238 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:51.499750 systemd-logind[1694]: New session 29 of user core. Mar 17 17:30:51.504714 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:30:51.881437 sshd[5009]: Connection closed by 10.200.16.10 port 34994 Mar 17 17:30:51.881913 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:51.885375 systemd-logind[1694]: Session 29 logged out. Waiting for processes to exit. Mar 17 17:30:51.886389 systemd[1]: sshd@26-10.200.20.35:22-10.200.16.10:34994.service: Deactivated successfully. Mar 17 17:30:51.889509 systemd[1]: session-29.scope: Deactivated successfully. Mar 17 17:30:51.891855 systemd-logind[1694]: Removed session 29. Mar 17 17:30:56.975277 systemd[1]: Started sshd@27-10.200.20.35:22-10.200.16.10:35008.service - OpenSSH per-connection server daemon (10.200.16.10:35008). Mar 17 17:30:57.465222 sshd[5020]: Accepted publickey for core from 10.200.16.10 port 35008 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:57.466745 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:57.471589 systemd-logind[1694]: New session 30 of user core. Mar 17 17:30:57.480721 systemd[1]: Started session-30.scope - Session 30 of User core. 
Mar 17 17:30:57.888348 sshd[5022]: Connection closed by 10.200.16.10 port 35008 Mar 17 17:30:57.888994 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Mar 17 17:30:57.892418 systemd[1]: sshd@27-10.200.20.35:22-10.200.16.10:35008.service: Deactivated successfully. Mar 17 17:30:57.894095 systemd[1]: session-30.scope: Deactivated successfully. Mar 17 17:30:57.894985 systemd-logind[1694]: Session 30 logged out. Waiting for processes to exit. Mar 17 17:30:57.896402 systemd-logind[1694]: Removed session 30. Mar 17 17:30:57.974834 systemd[1]: Started sshd@28-10.200.20.35:22-10.200.16.10:35018.service - OpenSSH per-connection server daemon (10.200.16.10:35018). Mar 17 17:30:58.421885 sshd[5033]: Accepted publickey for core from 10.200.16.10 port 35018 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc Mar 17 17:30:58.423228 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:30:58.428432 systemd-logind[1694]: New session 31 of user core. Mar 17 17:30:58.431688 systemd[1]: Started session-31.scope - Session 31 of User core. 
Mar 17 17:31:01.096577 containerd[1808]: time="2025-03-17T17:31:01.095943439Z" level=info msg="StopContainer for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" with timeout 30 (s)" Mar 17 17:31:01.098612 containerd[1808]: time="2025-03-17T17:31:01.098021398Z" level=info msg="Stop container \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" with signal terminated" Mar 17 17:31:01.110279 containerd[1808]: time="2025-03-17T17:31:01.110162753Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:31:01.113395 systemd[1]: cri-containerd-2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295.scope: Deactivated successfully. Mar 17 17:31:01.122791 containerd[1808]: time="2025-03-17T17:31:01.122609068Z" level=info msg="StopContainer for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" with timeout 2 (s)" Mar 17 17:31:01.123337 containerd[1808]: time="2025-03-17T17:31:01.123301988Z" level=info msg="Stop container \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" with signal terminated" Mar 17 17:31:01.137135 systemd-networkd[1501]: lxc_health: Link DOWN Mar 17 17:31:01.137144 systemd-networkd[1501]: lxc_health: Lost carrier Mar 17 17:31:01.146796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295-rootfs.mount: Deactivated successfully. Mar 17 17:31:01.150482 systemd[1]: cri-containerd-05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918.scope: Deactivated successfully. Mar 17 17:31:01.150947 systemd[1]: cri-containerd-05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918.scope: Consumed 7.024s CPU time. 
Mar 17 17:31:01.171150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918-rootfs.mount: Deactivated successfully.
Mar 17 17:31:01.257986 containerd[1808]: time="2025-03-17T17:31:01.257910375Z" level=info msg="shim disconnected" id=05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918 namespace=k8s.io
Mar 17 17:31:01.257986 containerd[1808]: time="2025-03-17T17:31:01.257979015Z" level=warning msg="cleaning up after shim disconnected" id=05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918 namespace=k8s.io
Mar 17 17:31:01.257986 containerd[1808]: time="2025-03-17T17:31:01.257987655Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:01.258332 containerd[1808]: time="2025-03-17T17:31:01.258168135Z" level=info msg="shim disconnected" id=2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295 namespace=k8s.io
Mar 17 17:31:01.258332 containerd[1808]: time="2025-03-17T17:31:01.258194655Z" level=warning msg="cleaning up after shim disconnected" id=2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295 namespace=k8s.io
Mar 17 17:31:01.258332 containerd[1808]: time="2025-03-17T17:31:01.258202575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:01.279926 containerd[1808]: time="2025-03-17T17:31:01.279883207Z" level=info msg="StopContainer for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" returns successfully"
Mar 17 17:31:01.280741 containerd[1808]: time="2025-03-17T17:31:01.280709606Z" level=info msg="StopPodSandbox for \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\""
Mar 17 17:31:01.280853 containerd[1808]: time="2025-03-17T17:31:01.280756846Z" level=info msg="Container to stop \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:31:01.280853 containerd[1808]: time="2025-03-17T17:31:01.280769366Z" level=info msg="Container to stop \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:31:01.280853 containerd[1808]: time="2025-03-17T17:31:01.280778286Z" level=info msg="Container to stop \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:31:01.280853 containerd[1808]: time="2025-03-17T17:31:01.280787166Z" level=info msg="Container to stop \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:31:01.280853 containerd[1808]: time="2025-03-17T17:31:01.280798406Z" level=info msg="Container to stop \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:31:01.283175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e-shm.mount: Deactivated successfully.
Mar 17 17:31:01.284062 containerd[1808]: time="2025-03-17T17:31:01.283738645Z" level=info msg="StopContainer for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" returns successfully"
Mar 17 17:31:01.285958 containerd[1808]: time="2025-03-17T17:31:01.285887484Z" level=info msg="StopPodSandbox for \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\""
Mar 17 17:31:01.285958 containerd[1808]: time="2025-03-17T17:31:01.285938404Z" level=info msg="Container to stop \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 17:31:01.287927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a-shm.mount: Deactivated successfully.
Mar 17 17:31:01.293965 systemd[1]: cri-containerd-7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e.scope: Deactivated successfully.
Mar 17 17:31:01.299480 systemd[1]: cri-containerd-96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a.scope: Deactivated successfully.
Mar 17 17:31:01.353284 containerd[1808]: time="2025-03-17T17:31:01.353025618Z" level=info msg="shim disconnected" id=7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e namespace=k8s.io
Mar 17 17:31:01.353284 containerd[1808]: time="2025-03-17T17:31:01.353090618Z" level=warning msg="cleaning up after shim disconnected" id=7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e namespace=k8s.io
Mar 17 17:31:01.353284 containerd[1808]: time="2025-03-17T17:31:01.353098618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:01.354288 containerd[1808]: time="2025-03-17T17:31:01.353768418Z" level=info msg="shim disconnected" id=96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a namespace=k8s.io
Mar 17 17:31:01.354288 containerd[1808]: time="2025-03-17T17:31:01.354059777Z" level=warning msg="cleaning up after shim disconnected" id=96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a namespace=k8s.io
Mar 17 17:31:01.354288 containerd[1808]: time="2025-03-17T17:31:01.354069337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:01.367599 containerd[1808]: time="2025-03-17T17:31:01.367404132Z" level=info msg="TearDown network for sandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" successfully"
Mar 17 17:31:01.367599 containerd[1808]: time="2025-03-17T17:31:01.367442612Z" level=info msg="StopPodSandbox for \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" returns successfully"
Mar 17 17:31:01.371685 containerd[1808]: time="2025-03-17T17:31:01.371638171Z" level=info msg="TearDown network for sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" successfully"
Mar 17 17:31:01.371685 containerd[1808]: time="2025-03-17T17:31:01.371675771Z" level=info msg="StopPodSandbox for \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" returns successfully"
Mar 17 17:31:01.507600 kubelet[3337]: I0317 17:31:01.506987 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cni-path\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.507600 kubelet[3337]: I0317 17:31:01.507032 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-bpf-maps\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.507600 kubelet[3337]: I0317 17:31:01.507065 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/257fac58-5384-444c-b517-deacc2d28da3-cilium-config-path\") pod \"257fac58-5384-444c-b517-deacc2d28da3\" (UID: \"257fac58-5384-444c-b517-deacc2d28da3\") "
Mar 17 17:31:01.507600 kubelet[3337]: I0317 17:31:01.507085 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmjcl\" (UniqueName: \"kubernetes.io/projected/257fac58-5384-444c-b517-deacc2d28da3-kube-api-access-fmjcl\") pod \"257fac58-5384-444c-b517-deacc2d28da3\" (UID: \"257fac58-5384-444c-b517-deacc2d28da3\") "
Mar 17 17:31:01.507600 kubelet[3337]: I0317 17:31:01.507108 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb5574a5-4148-41b0-b2b5-243de437e748-clustermesh-secrets\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.507600 kubelet[3337]: I0317 17:31:01.507123 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-hostproc\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508186 kubelet[3337]: I0317 17:31:01.507120 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cni-path" (OuterVolumeSpecName: "cni-path") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.508186 kubelet[3337]: I0317 17:31:01.507144 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-cgroup\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508186 kubelet[3337]: I0317 17:31:01.507160 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-net\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508186 kubelet[3337]: I0317 17:31:01.507202 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-hubble-tls\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508186 kubelet[3337]: I0317 17:31:01.507221 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-run\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508186 kubelet[3337]: I0317 17:31:01.507238 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-config-path\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508323 kubelet[3337]: I0317 17:31:01.507254 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-etc-cni-netd\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508323 kubelet[3337]: I0317 17:31:01.507272 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r4bv\" (UniqueName: \"kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-kube-api-access-6r4bv\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508323 kubelet[3337]: I0317 17:31:01.507286 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-kernel\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508323 kubelet[3337]: I0317 17:31:01.507299 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-lib-modules\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508323 kubelet[3337]: I0317 17:31:01.507314 3337 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-xtables-lock\") pod \"eb5574a5-4148-41b0-b2b5-243de437e748\" (UID: \"eb5574a5-4148-41b0-b2b5-243de437e748\") "
Mar 17 17:31:01.508323 kubelet[3337]: I0317 17:31:01.507351 3337 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cni-path\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.508449 kubelet[3337]: I0317 17:31:01.507375 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.509865 kubelet[3337]: I0317 17:31:01.509573 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-hostproc" (OuterVolumeSpecName: "hostproc") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.509865 kubelet[3337]: I0317 17:31:01.509768 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.510550 kubelet[3337]: I0317 17:31:01.509634 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.510550 kubelet[3337]: I0317 17:31:01.510460 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.513188 kubelet[3337]: I0317 17:31:01.513087 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.513377 kubelet[3337]: I0317 17:31:01.513290 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.515134 kubelet[3337]: I0317 17:31:01.515000 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.515134 kubelet[3337]: I0317 17:31:01.515042 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 17:31:01.515973 kubelet[3337]: I0317 17:31:01.515739 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257fac58-5384-444c-b517-deacc2d28da3-kube-api-access-fmjcl" (OuterVolumeSpecName: "kube-api-access-fmjcl") pod "257fac58-5384-444c-b517-deacc2d28da3" (UID: "257fac58-5384-444c-b517-deacc2d28da3"). InnerVolumeSpecName "kube-api-access-fmjcl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:31:01.515973 kubelet[3337]: I0317 17:31:01.515778 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5574a5-4148-41b0-b2b5-243de437e748-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 17:31:01.516343 kubelet[3337]: I0317 17:31:01.516310 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257fac58-5384-444c-b517-deacc2d28da3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "257fac58-5384-444c-b517-deacc2d28da3" (UID: "257fac58-5384-444c-b517-deacc2d28da3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 17:31:01.516747 kubelet[3337]: I0317 17:31:01.516715 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 17:31:01.517621 kubelet[3337]: I0317 17:31:01.516626 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-kube-api-access-6r4bv" (OuterVolumeSpecName: "kube-api-access-6r4bv") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "kube-api-access-6r4bv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:31:01.518061 kubelet[3337]: I0317 17:31:01.518026 3337 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eb5574a5-4148-41b0-b2b5-243de437e748" (UID: "eb5574a5-4148-41b0-b2b5-243de437e748"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 17:31:01.552493 kubelet[3337]: I0317 17:31:01.552466 3337 scope.go:117] "RemoveContainer" containerID="2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295"
Mar 17 17:31:01.555639 containerd[1808]: time="2025-03-17T17:31:01.555085419Z" level=info msg="RemoveContainer for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\""
Mar 17 17:31:01.561079 systemd[1]: Removed slice kubepods-besteffort-pod257fac58_5384_444c_b517_deacc2d28da3.slice - libcontainer container kubepods-besteffort-pod257fac58_5384_444c_b517_deacc2d28da3.slice.
Mar 17 17:31:01.566522 systemd[1]: Removed slice kubepods-burstable-podeb5574a5_4148_41b0_b2b5_243de437e748.slice - libcontainer container kubepods-burstable-podeb5574a5_4148_41b0_b2b5_243de437e748.slice.
Mar 17 17:31:01.566993 systemd[1]: kubepods-burstable-podeb5574a5_4148_41b0_b2b5_243de437e748.slice: Consumed 7.099s CPU time.
Mar 17 17:31:01.572102 containerd[1808]: time="2025-03-17T17:31:01.572011732Z" level=info msg="RemoveContainer for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" returns successfully"
Mar 17 17:31:01.572481 kubelet[3337]: I0317 17:31:01.572453 3337 scope.go:117] "RemoveContainer" containerID="2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295"
Mar 17 17:31:01.572785 containerd[1808]: time="2025-03-17T17:31:01.572718092Z" level=error msg="ContainerStatus for \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\": not found"
Mar 17 17:31:01.573560 kubelet[3337]: E0317 17:31:01.573353 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\": not found" containerID="2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295"
Mar 17 17:31:01.573560 kubelet[3337]: I0317 17:31:01.573384 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295"} err="failed to get container status \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a22cc849244e6edab9b7623fa93d22df271d5eb9bc1845ef5e41d359f736295\": not found"
Mar 17 17:31:01.573560 kubelet[3337]: I0317 17:31:01.573459 3337 scope.go:117] "RemoveContainer" containerID="05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918"
Mar 17 17:31:01.575888 containerd[1808]: time="2025-03-17T17:31:01.575774010Z" level=info msg="RemoveContainer for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\""
Mar 17 17:31:01.584653 containerd[1808]: time="2025-03-17T17:31:01.584520527Z" level=info msg="RemoveContainer for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" returns successfully"
Mar 17 17:31:01.585036 kubelet[3337]: I0317 17:31:01.585008 3337 scope.go:117] "RemoveContainer" containerID="b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1"
Mar 17 17:31:01.586715 containerd[1808]: time="2025-03-17T17:31:01.586674806Z" level=info msg="RemoveContainer for \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\""
Mar 17 17:31:01.599712 containerd[1808]: time="2025-03-17T17:31:01.599636081Z" level=info msg="RemoveContainer for \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\" returns successfully"
Mar 17 17:31:01.599915 kubelet[3337]: I0317 17:31:01.599887 3337 scope.go:117] "RemoveContainer" containerID="ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381"
Mar 17 17:31:01.601234 containerd[1808]: time="2025-03-17T17:31:01.601167480Z" level=info msg="RemoveContainer for \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608446 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmjcl\" (UniqueName: \"kubernetes.io/projected/257fac58-5384-444c-b517-deacc2d28da3-kube-api-access-fmjcl\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608480 3337 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-bpf-maps\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608493 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/257fac58-5384-444c-b517-deacc2d28da3-cilium-config-path\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608502 3337 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eb5574a5-4148-41b0-b2b5-243de437e748-clustermesh-secrets\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608512 3337 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-hostproc\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608521 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-cgroup\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608544 3337 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-net\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.608586 kubelet[3337]: I0317 17:31:01.608577 3337 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-hubble-tls\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608586 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-run\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608596 3337 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eb5574a5-4148-41b0-b2b5-243de437e748-cilium-config-path\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608604 3337 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-xtables-lock\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608613 3337 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-etc-cni-netd\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608621 3337 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6r4bv\" (UniqueName: \"kubernetes.io/projected/eb5574a5-4148-41b0-b2b5-243de437e748-kube-api-access-6r4bv\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608642 3337 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-host-proc-sys-kernel\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.609233 kubelet[3337]: I0317 17:31:01.608652 3337 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb5574a5-4148-41b0-b2b5-243de437e748-lib-modules\") on node \"ci-4152.2.2-a-e33ca1f69b\" DevicePath \"\""
Mar 17 17:31:01.613447 containerd[1808]: time="2025-03-17T17:31:01.613400876Z" level=info msg="RemoveContainer for \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\" returns successfully"
Mar 17 17:31:01.613827 kubelet[3337]: I0317 17:31:01.613703 3337 scope.go:117] "RemoveContainer" containerID="79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce"
Mar 17 17:31:01.615237 containerd[1808]: time="2025-03-17T17:31:01.614971835Z" level=info msg="RemoveContainer for \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\""
Mar 17 17:31:01.626997 containerd[1808]: time="2025-03-17T17:31:01.626921830Z" level=info msg="RemoveContainer for \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\" returns successfully"
Mar 17 17:31:01.627254 kubelet[3337]: I0317 17:31:01.627226 3337 scope.go:117] "RemoveContainer" containerID="2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947"
Mar 17 17:31:01.628647 containerd[1808]: time="2025-03-17T17:31:01.628595870Z" level=info msg="RemoveContainer for \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\""
Mar 17 17:31:01.656254 containerd[1808]: time="2025-03-17T17:31:01.656206299Z" level=info msg="RemoveContainer for \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\" returns successfully"
Mar 17 17:31:01.656705 kubelet[3337]: I0317 17:31:01.656583 3337 scope.go:117] "RemoveContainer" containerID="05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918"
Mar 17 17:31:01.656967 containerd[1808]: time="2025-03-17T17:31:01.656926339Z" level=error msg="ContainerStatus for \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\": not found"
Mar 17 17:31:01.657220 kubelet[3337]: E0317 17:31:01.657105 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\": not found" containerID="05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918"
Mar 17 17:31:01.657220 kubelet[3337]: I0317 17:31:01.657136 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918"} err="failed to get container status \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\": rpc error: code = NotFound desc = an error occurred when try to find container \"05969b4c34305d8280deeee47c057d3e094802e47d13203cf5e64991f7dc6918\": not found"
Mar 17 17:31:01.657220 kubelet[3337]: I0317 17:31:01.657157 3337 scope.go:117] "RemoveContainer" containerID="b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1"
Mar 17 17:31:01.657526 containerd[1808]: time="2025-03-17T17:31:01.657452258Z" level=error msg="ContainerStatus for \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\": not found"
Mar 17 17:31:01.657669 kubelet[3337]: E0317 17:31:01.657638 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\": not found" containerID="b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1"
Mar 17 17:31:01.657706 kubelet[3337]: I0317 17:31:01.657671 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1"} err="failed to get container status \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b84c47a2fc7317b07f273c6794822779faaf75fda9c5019697740baa0a29fce1\": not found"
Mar 17 17:31:01.657706 kubelet[3337]: I0317 17:31:01.657687 3337 scope.go:117] "RemoveContainer" containerID="ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381"
Mar 17 17:31:01.658053 containerd[1808]: time="2025-03-17T17:31:01.658015618Z" level=error msg="ContainerStatus for \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\": not found"
Mar 17 17:31:01.658172 kubelet[3337]: E0317 17:31:01.658147 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\": not found" containerID="ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381"
Mar 17 17:31:01.658209 kubelet[3337]: I0317 17:31:01.658176 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381"} err="failed to get container status \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed37bea3a8dad646d78960b0a14429e9ec43b2b89e708d193564b3dc0fba7381\": not found"
Mar 17 17:31:01.658209 kubelet[3337]: I0317 17:31:01.658194 3337 scope.go:117] "RemoveContainer" containerID="79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce"
Mar 17 17:31:01.658389 containerd[1808]: time="2025-03-17T17:31:01.658352578Z" level=error msg="ContainerStatus for \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\": not found"
Mar 17 17:31:01.658621 kubelet[3337]: E0317 17:31:01.658586 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\": not found" containerID="79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce"
Mar 17 17:31:01.658690 kubelet[3337]: I0317 17:31:01.658625 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce"} err="failed to get container status \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"79464e27ce5f39a0c8467f4ab5aa2b494331b58d07a1cd846dac5cdec3d0c4ce\": not found"
Mar 17 17:31:01.658690 kubelet[3337]: I0317 17:31:01.658646 3337 scope.go:117] "RemoveContainer" containerID="2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947"
Mar 17 17:31:01.658965 containerd[1808]: time="2025-03-17T17:31:01.658915218Z" level=error msg="ContainerStatus for \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\": not found"
Mar 17 17:31:01.659205 kubelet[3337]: E0317 17:31:01.659104 3337 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\": not found" containerID="2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947"
Mar 17 17:31:01.659205 kubelet[3337]: I0317 17:31:01.659129 3337 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947"} err="failed to get container status \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d67fec0ad10552c0ed2e4ce19088031aaa1406c90250c6d1bfb01e5680d0947\": not found"
Mar 17 17:31:02.087571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a-rootfs.mount: Deactivated successfully.
Mar 17 17:31:02.087853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e-rootfs.mount: Deactivated successfully.
Mar 17 17:31:02.087911 systemd[1]: var-lib-kubelet-pods-257fac58\x2d5384\x2d444c\x2db517\x2ddeacc2d28da3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmjcl.mount: Deactivated successfully.
Mar 17 17:31:02.087966 systemd[1]: var-lib-kubelet-pods-eb5574a5\x2d4148\x2d41b0\x2db2b5\x2d243de437e748-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6r4bv.mount: Deactivated successfully.
Mar 17 17:31:02.088017 systemd[1]: var-lib-kubelet-pods-eb5574a5\x2d4148\x2d41b0\x2db2b5\x2d243de437e748-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 17:31:02.088065 systemd[1]: var-lib-kubelet-pods-eb5574a5\x2d4148\x2d41b0\x2db2b5\x2d243de437e748-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 17:31:03.020560 kubelet[3337]: I0317 17:31:03.020022 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257fac58-5384-444c-b517-deacc2d28da3" path="/var/lib/kubelet/pods/257fac58-5384-444c-b517-deacc2d28da3/volumes"
Mar 17 17:31:03.020560 kubelet[3337]: I0317 17:31:03.020385 3337 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5574a5-4148-41b0-b2b5-243de437e748" path="/var/lib/kubelet/pods/eb5574a5-4148-41b0-b2b5-243de437e748/volumes"
Mar 17 17:31:03.032281 containerd[1808]: time="2025-03-17T17:31:03.032236118Z" level=info msg="StopPodSandbox for \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\""
Mar 17 17:31:03.033026 containerd[1808]: time="2025-03-17T17:31:03.032330838Z" level=info msg="TearDown network for sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" successfully"
Mar 17 17:31:03.033026 containerd[1808]: time="2025-03-17T17:31:03.032340998Z" level=info msg="StopPodSandbox for \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" returns successfully"
Mar 17 17:31:03.033888 containerd[1808]: time="2025-03-17T17:31:03.033249758Z" level=info msg="RemovePodSandbox for \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\""
Mar 17 17:31:03.033888 containerd[1808]: time="2025-03-17T17:31:03.033278398Z" level=info msg="Forcibly stopping sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\""
Mar 17 17:31:03.033888 containerd[1808]: time="2025-03-17T17:31:03.033329038Z" level=info msg="TearDown network for sandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" successfully"
Mar 17 17:31:03.042920 containerd[1808]: time="2025-03-17T17:31:03.042868754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:31:03.043025 containerd[1808]: time="2025-03-17T17:31:03.042930394Z" level=info msg="RemovePodSandbox \"7ab02314498ec8b3b5b8864f221a5a29c93bd0489dd272992a050eedde9df17e\" returns successfully"
Mar 17 17:31:03.043749 containerd[1808]: time="2025-03-17T17:31:03.043617754Z" level=info msg="StopPodSandbox for \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\""
Mar 17 17:31:03.043749 containerd[1808]: time="2025-03-17T17:31:03.043699314Z" level=info msg="TearDown network for sandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" successfully"
Mar 17 17:31:03.043749 containerd[1808]: time="2025-03-17T17:31:03.043708354Z" level=info msg="StopPodSandbox for \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" returns successfully"
Mar 17 17:31:03.044007 containerd[1808]: time="2025-03-17T17:31:03.043977674Z" level=info msg="RemovePodSandbox for \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\""
Mar 17 17:31:03.044043 containerd[1808]: time="2025-03-17T17:31:03.044008554Z" level=info msg="Forcibly stopping sandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\""
Mar 17 17:31:03.044066 containerd[1808]: time="2025-03-17T17:31:03.044050474Z" level=info msg="TearDown network for sandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" successfully"
Mar 17 17:31:03.052657 containerd[1808]: time="2025-03-17T17:31:03.052606910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:31:03.052914 containerd[1808]: time="2025-03-17T17:31:03.052665630Z" level=info msg="RemovePodSandbox \"96ddba37f22720cc2cacc7cbd94ca806453faa888bf9a036e4f4f890b0c2b24a\" returns successfully"
Mar 17 17:31:03.079988 sshd[5035]: Connection closed by 10.200.16.10 port 35018
Mar 17 17:31:03.080689 sshd-session[5033]: pam_unix(sshd:session): session closed for user core
Mar 17 17:31:03.083953 systemd[1]: sshd@28-10.200.20.35:22-10.200.16.10:35018.service: Deactivated successfully.
Mar 17 17:31:03.087947 systemd[1]: session-31.scope: Deactivated successfully.
Mar 17 17:31:03.088294 systemd[1]: session-31.scope: Consumed 1.754s CPU time.
Mar 17 17:31:03.089832 systemd-logind[1694]: Session 31 logged out. Waiting for processes to exit.
Mar 17 17:31:03.091856 systemd-logind[1694]: Removed session 31.
Mar 17 17:31:03.163595 kubelet[3337]: E0317 17:31:03.163517 3337 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:31:03.171852 systemd[1]: Started sshd@29-10.200.20.35:22-10.200.16.10:49524.service - OpenSSH per-connection server daemon (10.200.16.10:49524).
Mar 17 17:31:03.618644 sshd[5199]: Accepted publickey for core from 10.200.16.10 port 49524 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc
Mar 17 17:31:03.620037 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:31:03.625761 systemd-logind[1694]: New session 32 of user core.
Mar 17 17:31:03.633869 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 17 17:31:05.339125 kubelet[3337]: I0317 17:31:05.339056 3337 memory_manager.go:355] "RemoveStaleState removing state" podUID="257fac58-5384-444c-b517-deacc2d28da3" containerName="cilium-operator"
Mar 17 17:31:05.339125 kubelet[3337]: I0317 17:31:05.339098 3337 memory_manager.go:355] "RemoveStaleState removing state" podUID="eb5574a5-4148-41b0-b2b5-243de437e748" containerName="cilium-agent"
Mar 17 17:31:05.350890 systemd[1]: Created slice kubepods-burstable-pod52a1d136_d1f3_444b_b621_9499b55a5665.slice - libcontainer container kubepods-burstable-pod52a1d136_d1f3_444b_b621_9499b55a5665.slice.
Mar 17 17:31:05.376575 sshd[5201]: Connection closed by 10.200.16.10 port 49524
Mar 17 17:31:05.376171 sshd-session[5199]: pam_unix(sshd:session): session closed for user core
Mar 17 17:31:05.381197 systemd[1]: sshd@29-10.200.20.35:22-10.200.16.10:49524.service: Deactivated successfully.
Mar 17 17:31:05.385054 systemd[1]: session-32.scope: Deactivated successfully.
Mar 17 17:31:05.385821 systemd[1]: session-32.scope: Consumed 1.341s CPU time.
Mar 17 17:31:05.388220 systemd-logind[1694]: Session 32 logged out. Waiting for processes to exit.
Mar 17 17:31:05.389925 systemd-logind[1694]: Removed session 32.
Mar 17 17:31:05.432853 kubelet[3337]: I0317 17:31:05.432791 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/52a1d136-d1f3-444b-b621-9499b55a5665-clustermesh-secrets\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.432853 kubelet[3337]: I0317 17:31:05.432845 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-host-proc-sys-kernel\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.432853 kubelet[3337]: I0317 17:31:05.432865 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-hostproc\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433065 kubelet[3337]: I0317 17:31:05.432882 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-xtables-lock\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433065 kubelet[3337]: I0317 17:31:05.432902 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/52a1d136-d1f3-444b-b621-9499b55a5665-cilium-ipsec-secrets\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433065 kubelet[3337]: I0317 17:31:05.432917 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-host-proc-sys-net\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433065 kubelet[3337]: I0317 17:31:05.432935 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-bpf-maps\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433065 kubelet[3337]: I0317 17:31:05.432950 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-cilium-cgroup\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433065 kubelet[3337]: I0317 17:31:05.432966 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-lib-modules\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433195 kubelet[3337]: I0317 17:31:05.432986 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/52a1d136-d1f3-444b-b621-9499b55a5665-cilium-config-path\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433195 kubelet[3337]: I0317 17:31:05.433000 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/52a1d136-d1f3-444b-b621-9499b55a5665-hubble-tls\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433195 kubelet[3337]: I0317 17:31:05.433017 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-cilium-run\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433195 kubelet[3337]: I0317 17:31:05.433033 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-cni-path\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433195 kubelet[3337]: I0317 17:31:05.433053 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/52a1d136-d1f3-444b-b621-9499b55a5665-etc-cni-netd\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.433195 kubelet[3337]: I0317 17:31:05.433072 3337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529hr\" (UniqueName: \"kubernetes.io/projected/52a1d136-d1f3-444b-b621-9499b55a5665-kube-api-access-529hr\") pod \"cilium-dk8rs\" (UID: \"52a1d136-d1f3-444b-b621-9499b55a5665\") " pod="kube-system/cilium-dk8rs"
Mar 17 17:31:05.468907 systemd[1]: Started sshd@30-10.200.20.35:22-10.200.16.10:49538.service - OpenSSH per-connection server daemon (10.200.16.10:49538).
Mar 17 17:31:05.655209 containerd[1808]: time="2025-03-17T17:31:05.655074656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dk8rs,Uid:52a1d136-d1f3-444b-b621-9499b55a5665,Namespace:kube-system,Attempt:0,}"
Mar 17 17:31:05.706909 containerd[1808]: time="2025-03-17T17:31:05.706346807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:31:05.706909 containerd[1808]: time="2025-03-17T17:31:05.706799206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:31:05.706909 containerd[1808]: time="2025-03-17T17:31:05.706811686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:05.707266 containerd[1808]: time="2025-03-17T17:31:05.707108086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:31:05.726762 systemd[1]: Started cri-containerd-b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6.scope - libcontainer container b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6.
Mar 17 17:31:05.749315 containerd[1808]: time="2025-03-17T17:31:05.749041759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dk8rs,Uid:52a1d136-d1f3-444b-b621-9499b55a5665,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\""
Mar 17 17:31:05.752823 containerd[1808]: time="2025-03-17T17:31:05.752778438Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 17:31:05.842502 containerd[1808]: time="2025-03-17T17:31:05.842409662Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df\""
Mar 17 17:31:05.843371 containerd[1808]: time="2025-03-17T17:31:05.843315902Z" level=info msg="StartContainer for \"f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df\""
Mar 17 17:31:05.868761 systemd[1]: Started cri-containerd-f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df.scope - libcontainer container f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df.
Mar 17 17:31:05.904821 containerd[1808]: time="2025-03-17T17:31:05.904670611Z" level=info msg="StartContainer for \"f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df\" returns successfully"
Mar 17 17:31:05.911810 systemd[1]: cri-containerd-f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df.scope: Deactivated successfully.
Mar 17 17:31:05.957799 sshd[5210]: Accepted publickey for core from 10.200.16.10 port 49538 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc
Mar 17 17:31:05.959205 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:31:05.963849 systemd-logind[1694]: New session 33 of user core.
Mar 17 17:31:05.969971 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 17 17:31:05.977506 containerd[1808]: time="2025-03-17T17:31:05.977430038Z" level=info msg="shim disconnected" id=f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df namespace=k8s.io
Mar 17 17:31:05.977506 containerd[1808]: time="2025-03-17T17:31:05.977500558Z" level=warning msg="cleaning up after shim disconnected" id=f5a2eaf0c17aafc5c675aba89aa4c479878c9d5dde7c6006517ff222b0c990df namespace=k8s.io
Mar 17 17:31:05.977506 containerd[1808]: time="2025-03-17T17:31:05.977509918Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:06.314005 sshd[5305]: Connection closed by 10.200.16.10 port 49538
Mar 17 17:31:06.313471 sshd-session[5210]: pam_unix(sshd:session): session closed for user core
Mar 17 17:31:06.316261 systemd[1]: sshd@30-10.200.20.35:22-10.200.16.10:49538.service: Deactivated successfully.
Mar 17 17:31:06.318749 systemd[1]: session-33.scope: Deactivated successfully.
Mar 17 17:31:06.320923 systemd-logind[1694]: Session 33 logged out. Waiting for processes to exit.
Mar 17 17:31:06.322649 systemd-logind[1694]: Removed session 33.
Mar 17 17:31:06.407830 systemd[1]: Started sshd@31-10.200.20.35:22-10.200.16.10:49548.service - OpenSSH per-connection server daemon (10.200.16.10:49548).
Mar 17 17:31:06.573454 containerd[1808]: time="2025-03-17T17:31:06.573265212Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:31:06.614044 containerd[1808]: time="2025-03-17T17:31:06.613990724Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21\""
Mar 17 17:31:06.615955 containerd[1808]: time="2025-03-17T17:31:06.614512764Z" level=info msg="StartContainer for \"e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21\""
Mar 17 17:31:06.647753 systemd[1]: Started cri-containerd-e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21.scope - libcontainer container e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21.
Mar 17 17:31:06.676626 containerd[1808]: time="2025-03-17T17:31:06.676473913Z" level=info msg="StartContainer for \"e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21\" returns successfully"
Mar 17 17:31:06.678119 systemd[1]: cri-containerd-e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21.scope: Deactivated successfully.
Mar 17 17:31:06.715842 containerd[1808]: time="2025-03-17T17:31:06.715773226Z" level=info msg="shim disconnected" id=e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21 namespace=k8s.io
Mar 17 17:31:06.716183 containerd[1808]: time="2025-03-17T17:31:06.715890226Z" level=warning msg="cleaning up after shim disconnected" id=e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21 namespace=k8s.io
Mar 17 17:31:06.716183 containerd[1808]: time="2025-03-17T17:31:06.715900946Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:06.898564 sshd[5325]: Accepted publickey for core from 10.200.16.10 port 49548 ssh2: RSA SHA256:Vv+Gx/xgYWEBj55H1UdRAcw683xVG5W8/4UU5IxNHAc
Mar 17 17:31:06.899979 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:31:06.903711 systemd-logind[1694]: New session 34 of user core.
Mar 17 17:31:06.908732 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 17 17:31:07.014445 kubelet[3337]: I0317 17:31:07.014384 3337 setters.go:602] "Node became not ready" node="ci-4152.2.2-a-e33ca1f69b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:31:07Z","lastTransitionTime":"2025-03-17T17:31:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:31:07.538952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e09f2964d25167690a8fd36293ef0c79b944d3d35a0025053e3f61b231857a21-rootfs.mount: Deactivated successfully.
Mar 17 17:31:07.579283 containerd[1808]: time="2025-03-17T17:31:07.579198912Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:31:07.633488 containerd[1808]: time="2025-03-17T17:31:07.633437022Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570\""
Mar 17 17:31:07.634282 containerd[1808]: time="2025-03-17T17:31:07.634247942Z" level=info msg="StartContainer for \"96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570\""
Mar 17 17:31:07.661913 systemd[1]: Started cri-containerd-96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570.scope - libcontainer container 96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570.
Mar 17 17:31:07.696173 systemd[1]: cri-containerd-96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570.scope: Deactivated successfully.
Mar 17 17:31:07.700010 containerd[1808]: time="2025-03-17T17:31:07.699045290Z" level=info msg="StartContainer for \"96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570\" returns successfully"
Mar 17 17:31:07.739018 containerd[1808]: time="2025-03-17T17:31:07.738922003Z" level=info msg="shim disconnected" id=96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570 namespace=k8s.io
Mar 17 17:31:07.739018 containerd[1808]: time="2025-03-17T17:31:07.738981203Z" level=warning msg="cleaning up after shim disconnected" id=96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570 namespace=k8s.io
Mar 17 17:31:07.739018 containerd[1808]: time="2025-03-17T17:31:07.738990643Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:08.165089 kubelet[3337]: E0317 17:31:08.165039 3337 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 17:31:08.539055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96629bf085cdbd38038c8003175ac57602aebaeaef29b021bcf7c6ff2424e570-rootfs.mount: Deactivated successfully.
Mar 17 17:31:08.583593 containerd[1808]: time="2025-03-17T17:31:08.582744692Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:31:08.635430 containerd[1808]: time="2025-03-17T17:31:08.635380923Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a\""
Mar 17 17:31:08.637158 containerd[1808]: time="2025-03-17T17:31:08.636194403Z" level=info msg="StartContainer for \"b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a\""
Mar 17 17:31:08.665732 systemd[1]: Started cri-containerd-b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a.scope - libcontainer container b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a.
Mar 17 17:31:08.689434 systemd[1]: cri-containerd-b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a.scope: Deactivated successfully.
Mar 17 17:31:08.695170 containerd[1808]: time="2025-03-17T17:31:08.695127392Z" level=info msg="StartContainer for \"b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a\" returns successfully"
Mar 17 17:31:08.740787 containerd[1808]: time="2025-03-17T17:31:08.740714904Z" level=info msg="shim disconnected" id=b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a namespace=k8s.io
Mar 17 17:31:08.740787 containerd[1808]: time="2025-03-17T17:31:08.740777464Z" level=warning msg="cleaning up after shim disconnected" id=b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a namespace=k8s.io
Mar 17 17:31:08.740787 containerd[1808]: time="2025-03-17T17:31:08.740785304Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:31:09.540202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b661af25cd81faf993e03588e8a4f944fdb0360ae3a58fd0fc29403d15d5ed8a-rootfs.mount: Deactivated successfully.
Mar 17 17:31:09.587220 containerd[1808]: time="2025-03-17T17:31:09.587101273Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:31:09.632777 containerd[1808]: time="2025-03-17T17:31:09.632685225Z" level=info msg="CreateContainer within sandbox \"b6ce46b54245a6ab1042e179daebb22def13623bf7f3d46bf8508e1a8b554fb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39\""
Mar 17 17:31:09.633624 containerd[1808]: time="2025-03-17T17:31:09.633576864Z" level=info msg="StartContainer for \"6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39\""
Mar 17 17:31:09.659739 systemd[1]: Started cri-containerd-6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39.scope - libcontainer container 6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39.
Mar 17 17:31:09.693058 containerd[1808]: time="2025-03-17T17:31:09.692838134Z" level=info msg="StartContainer for \"6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39\" returns successfully"
Mar 17 17:31:10.086653 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:31:12.902154 systemd-networkd[1501]: lxc_health: Link UP
Mar 17 17:31:12.918693 systemd-networkd[1501]: lxc_health: Gained carrier
Mar 17 17:31:13.692567 kubelet[3337]: I0317 17:31:13.689353 3337 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dk8rs" podStartSLOduration=8.689332949 podStartE2EDuration="8.689332949s" podCreationTimestamp="2025-03-17 17:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:31:10.61235473 +0000 UTC m=+247.932495505" watchObservedRunningTime="2025-03-17 17:31:13.689332949 +0000 UTC m=+251.009473724"
Mar 17 17:31:14.872691 systemd-networkd[1501]: lxc_health: Gained IPv6LL
Mar 17 17:31:19.938652 systemd[1]: run-containerd-runc-k8s.io-6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39-runc.XkwJfH.mount: Deactivated successfully.
Mar 17 17:31:24.212806 systemd[1]: run-containerd-runc-k8s.io-6f51189ceda6ddb97cd3995d73c6790c6953d5369bd0b9cb5c1219f33d7e7b39-runc.oJ2eeO.mount: Deactivated successfully.
Mar 17 17:31:26.370323 kubelet[3337]: E0317 17:31:26.370274 3337 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:33088->127.0.0.1:37267: read tcp 127.0.0.1:33088->127.0.0.1:37267: read: connection reset by peer
Mar 17 17:31:26.460815 sshd[5389]: Connection closed by 10.200.16.10 port 49548
Mar 17 17:31:26.461471 sshd-session[5325]: pam_unix(sshd:session): session closed for user core
Mar 17 17:31:26.465604 systemd[1]: sshd@31-10.200.20.35:22-10.200.16.10:49548.service: Deactivated successfully.
Mar 17 17:31:26.468460 systemd[1]: session-34.scope: Deactivated successfully.
Mar 17 17:31:26.469338 systemd-logind[1694]: Session 34 logged out. Waiting for processes to exit.
Mar 17 17:31:26.470435 systemd-logind[1694]: Removed session 34.