Jan 14 13:22:04.100488 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 14 13:22:04.100513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:22:04.100522 kernel: BIOS-provided physical RAM map:
Jan 14 13:22:04.100528 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:22:04.100534 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:22:04.100539 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:22:04.100546 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:22:04.100555 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:22:04.100561 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:22:04.100567 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:22:04.100573 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:22:04.100593 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:22:04.100603 kernel: NX (Execute Disable) protection: active
Jan 14 13:22:04.100611 kernel: APIC: Static calls initialized
Jan 14 13:22:04.100621 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:22:04.100629 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:22:04.100636 kernel: random: crng init done
Jan 14 13:22:04.100643 kernel: secureboot: Secure boot disabled
Jan 14 13:22:04.100649 kernel: SMBIOS 3.1.0 present.
Jan 14 13:22:04.100656 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:22:04.100663 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:22:04.100670 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:22:04.100677 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:22:04.100684 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:22:04.100693 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:22:04.100699 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:22:04.100706 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:22:04.100713 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:22:04.100721 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:22:04.100728 kernel: tsc: Detected 2593.906 MHz processor
Jan 14 13:22:04.100735 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:22:04.100742 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:22:04.100749 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:22:04.100759 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:22:04.100766 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:22:04.100772 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:22:04.100779 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:22:04.100786 kernel: Using GB pages for direct mapping
Jan 14 13:22:04.100793 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:22:04.100800 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:22:04.100810 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100820 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100827 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:22:04.100835 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:22:04.100842 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100849 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100857 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100866 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100874 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100881 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100888 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100896 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:22:04.100903 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:22:04.100911 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:22:04.100918 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:22:04.100925 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:22:04.100935 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:22:04.100942 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:22:04.100953 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:22:04.100963 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:22:04.100970 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:22:04.100981 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:22:04.100989 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:22:04.100999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:22:04.101012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:22:04.101020 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:22:04.101030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:22:04.101041 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:22:04.101049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:22:04.101057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:22:04.101068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:22:04.101075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:22:04.101084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:22:04.101095 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:22:04.101102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:22:04.101113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:22:04.101121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:22:04.101129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:22:04.101139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:22:04.101146 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:22:04.101154 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:22:04.101161 kernel: Zone ranges:
Jan 14 13:22:04.101171 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:22:04.101178 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:22:04.101186 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:22:04.101193 kernel: Movable zone start for each node
Jan 14 13:22:04.101200 kernel: Early memory node ranges
Jan 14 13:22:04.101208 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:22:04.101215 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:22:04.101222 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:22:04.101230 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:22:04.101240 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:22:04.101250 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:22:04.101257 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:22:04.101265 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:22:04.101276 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:22:04.101284 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:22:04.101294 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:22:04.101304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:22:04.101312 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:22:04.101327 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:22:04.101335 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:22:04.101347 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:22:04.101358 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:22:04.101369 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:22:04.101382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:22:04.101395 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:22:04.101405 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:22:04.101415 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:22:04.101429 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:22:04.101441 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:22:04.101454 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:22:04.101466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:22:04.101478 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:22:04.101491 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:22:04.101504 kernel: Fallback order for Node 0: 0
Jan 14 13:22:04.101518 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:22:04.101535 kernel: Policy zone: Normal
Jan 14 13:22:04.101558 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:22:04.101572 kernel: software IO TLB: area num 2.
Jan 14 13:22:04.101599 kernel: Memory: 8069620K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 317584K reserved, 0K cma-reserved)
Jan 14 13:22:04.101613 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:22:04.101628 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:22:04.101642 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:22:04.101655 kernel: Dynamic Preempt: voluntary
Jan 14 13:22:04.101670 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:22:04.101688 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:22:04.101702 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:22:04.101716 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:22:04.101724 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:22:04.101732 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:22:04.101740 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:22:04.101748 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:22:04.101758 kernel: Using NULL legacy PIC
Jan 14 13:22:04.101766 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:22:04.101774 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:22:04.101782 kernel: Console: colour dummy device 80x25
Jan 14 13:22:04.101790 kernel: printk: console [tty1] enabled
Jan 14 13:22:04.101797 kernel: printk: console [ttyS0] enabled
Jan 14 13:22:04.101805 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:22:04.101813 kernel: ACPI: Core revision 20230628
Jan 14 13:22:04.101821 kernel: Failed to register legacy timer interrupt
Jan 14 13:22:04.101829 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:22:04.101839 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:22:04.101847 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:22:04.101854 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:22:04.101862 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:22:04.101870 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:22:04.101878 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:22:04.101886 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:22:04.101894 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:22:04.101902 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jan 14 13:22:04.101912 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:22:04.101920 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:22:04.101928 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:22:04.101936 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:22:04.101944 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:22:04.101951 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:22:04.101960 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:22:04.101967 kernel: RETBleed: Vulnerable
Jan 14 13:22:04.101975 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:22:04.101983 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:22:04.101993 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:22:04.102000 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:22:04.102008 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:22:04.102016 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:22:04.102024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:22:04.102032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:22:04.102039 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:22:04.102047 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:22:04.102055 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:22:04.102063 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:22:04.102071 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:22:04.102081 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:22:04.102088 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:22:04.102096 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:22:04.102104 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:22:04.102112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:22:04.102119 kernel: landlock: Up and running.
Jan 14 13:22:04.102127 kernel: SELinux: Initializing.
Jan 14 13:22:04.102135 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:22:04.102143 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:22:04.102151 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:22:04.102159 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:22:04.102169 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:22:04.102177 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:22:04.102185 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:22:04.102193 kernel: signal: max sigframe size: 3632
Jan 14 13:22:04.102201 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:22:04.102209 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:22:04.102217 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:22:04.102225 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:22:04.102233 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:22:04.102243 kernel: .... node #0, CPUs: #1
Jan 14 13:22:04.102251 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:22:04.102260 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:22:04.102267 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:22:04.102275 kernel: smpboot: Max logical packages: 1
Jan 14 13:22:04.102283 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 13:22:04.102291 kernel: devtmpfs: initialized
Jan 14 13:22:04.102299 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:22:04.102309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:22:04.102316 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:22:04.102324 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:22:04.102332 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:22:04.102340 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:22:04.102348 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:22:04.102356 kernel: audit: type=2000 audit(1736860922.027:1): state=initialized audit_enabled=0 res=1
Jan 14 13:22:04.102364 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:22:04.102372 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:22:04.102382 kernel: cpuidle: using governor menu
Jan 14 13:22:04.102390 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:22:04.102398 kernel: dca service started, version 1.12.1
Jan 14 13:22:04.102406 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:22:04.102413 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:22:04.102421 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 13:22:04.102429 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 13:22:04.102437 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 13:22:04.102445 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 13:22:04.102455 kernel: ACPI: Added _OSI(Module Device)
Jan 14 13:22:04.102463 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 13:22:04.102471 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 13:22:04.102478 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 13:22:04.102486 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 13:22:04.102494 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 13:22:04.102502 kernel: ACPI: Interpreter enabled
Jan 14 13:22:04.102510 kernel: ACPI: PM: (supports S0 S5)
Jan 14 13:22:04.102518 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 13:22:04.102528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 13:22:04.102535 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 13:22:04.102543 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 13:22:04.102551 kernel: iommu: Default domain type: Translated
Jan 14 13:22:04.102559 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 13:22:04.102567 kernel: efivars: Registered efivars operations
Jan 14 13:22:04.102575 kernel: PCI: Using ACPI for IRQ routing
Jan 14 13:22:04.102643 kernel: PCI: System does not support PCI
Jan 14 13:22:04.102656 kernel: vgaarb: loaded
Jan 14 13:22:04.102673 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 13:22:04.102685 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 13:22:04.102713 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 13:22:04.102724 kernel: pnp: PnP ACPI init
Jan 14 13:22:04.102732 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 13:22:04.102740 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 13:22:04.102748 kernel: NET: Registered PF_INET protocol family
Jan 14 13:22:04.102756 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 13:22:04.102764 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 13:22:04.102775 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 13:22:04.102784 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 13:22:04.102792 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 13:22:04.102799 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 13:22:04.102807 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:22:04.102815 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 13:22:04.102823 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 13:22:04.102831 kernel: NET: Registered PF_XDP protocol family
Jan 14 13:22:04.102839 kernel: PCI: CLS 0 bytes, default 64
Jan 14 13:22:04.102849 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 13:22:04.102857 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Jan 14 13:22:04.102865 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 13:22:04.102873 kernel: Initialise system trusted keyrings
Jan 14 13:22:04.102881 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 13:22:04.102888 kernel: Key type asymmetric registered
Jan 14 13:22:04.102896 kernel: Asymmetric key parser 'x509' registered
Jan 14 13:22:04.102904 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 13:22:04.102912 kernel: io scheduler mq-deadline registered
Jan 14 13:22:04.102922 kernel: io scheduler kyber registered
Jan 14 13:22:04.102930 kernel: io scheduler bfq registered
Jan 14 13:22:04.102938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 13:22:04.102945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 13:22:04.102953 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 13:22:04.102962 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 13:22:04.102969 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 13:22:04.103092 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 13:22:04.103176 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:22:03 UTC (1736860923)
Jan 14 13:22:04.103248 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 13:22:04.103259 kernel: intel_pstate: CPU model not supported
Jan 14 13:22:04.103267 kernel: efifb: probing for efifb
Jan 14 13:22:04.103275 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 13:22:04.103283 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 13:22:04.103291 kernel: efifb: scrolling: redraw
Jan 14 13:22:04.103299 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 13:22:04.103307 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 13:22:04.103318 kernel: fb0: EFI VGA frame buffer device
Jan 14 13:22:04.103326 kernel: pstore: Using crash dump compression: deflate
Jan 14 13:22:04.103333 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 13:22:04.103341 kernel: NET: Registered PF_INET6 protocol family
Jan 14 13:22:04.103349 kernel: Segment Routing with IPv6
Jan 14 13:22:04.103357 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 13:22:04.103365 kernel: NET: Registered PF_PACKET protocol family
Jan 14 13:22:04.103373 kernel: Key type dns_resolver registered
Jan 14 13:22:04.103381 kernel: IPI shorthand broadcast: enabled
Jan 14 13:22:04.103391 kernel: sched_clock: Marking stable (929004500, 59950400)->(1264880700, -275925800)
Jan 14 13:22:04.103399 kernel: registered taskstats version 1
Jan 14 13:22:04.103407 kernel: Loading compiled-in X.509 certificates
Jan 14 13:22:04.103414 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 14 13:22:04.103422 kernel: Key type .fscrypt registered
Jan 14 13:22:04.103430 kernel: Key type fscrypt-provisioning registered
Jan 14 13:22:04.103438 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 13:22:04.103446 kernel: ima: Allocated hash algorithm: sha1
Jan 14 13:22:04.103456 kernel: ima: No architecture policies found
Jan 14 13:22:04.103464 kernel: clk: Disabling unused clocks
Jan 14 13:22:04.103472 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 14 13:22:04.103480 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 13:22:04.103488 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 14 13:22:04.103496 kernel: Run /init as init process
Jan 14 13:22:04.103504 kernel: with arguments:
Jan 14 13:22:04.103512 kernel: /init
Jan 14 13:22:04.103519 kernel: with environment:
Jan 14 13:22:04.103527 kernel: HOME=/
Jan 14 13:22:04.103537 kernel: TERM=linux
Jan 14 13:22:04.103545 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 13:22:04.103555 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 13:22:04.103564 systemd[1]: Detected virtualization microsoft.
Jan 14 13:22:04.103573 systemd[1]: Detected architecture x86-64.
Jan 14 13:22:04.103595 systemd[1]: Running in initrd.
Jan 14 13:22:04.103608 systemd[1]: No hostname configured, using default hostname.
Jan 14 13:22:04.103623 systemd[1]: Hostname set to .
Jan 14 13:22:04.103637 systemd[1]: Initializing machine ID from random generator.
Jan 14 13:22:04.103649 systemd[1]: Queued start job for default target initrd.target.
Jan 14 13:22:04.103663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:22:04.103677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:22:04.103693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 13:22:04.103709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 13:22:04.103725 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 13:22:04.103746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 13:22:04.103766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 13:22:04.103779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 13:22:04.103794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:22:04.103809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:22:04.103823 systemd[1]: Reached target paths.target - Path Units.
Jan 14 13:22:04.103846 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 13:22:04.103862 systemd[1]: Reached target swap.target - Swaps.
Jan 14 13:22:04.103876 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 13:22:04.103889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:22:04.103904 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:22:04.103917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 13:22:04.103931 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 13:22:04.103945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:22:04.103959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:22:04.103974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:22:04.103991 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 13:22:04.104007 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 13:22:04.104023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 13:22:04.104037 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 13:22:04.104051 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 13:22:04.104066 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 13:22:04.104082 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 13:22:04.104122 systemd-journald[177]: Collecting audit messages is disabled.
Jan 14 13:22:04.104155 systemd-journald[177]: Journal started
Jan 14 13:22:04.104184 systemd-journald[177]: Runtime Journal (/run/log/journal/3e18c7ebbbd24626b3172a575931da91) is 8.0M, max 158.8M, 150.8M free.
Jan 14 13:22:04.109600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 13:22:04.119593 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 13:22:04.122901 systemd-modules-load[178]: Inserted module 'overlay'
Jan 14 13:22:04.129366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 13:22:04.135215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:22:04.147465 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 13:22:04.156220 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 13:22:04.166599 kernel: Bridge firewalling registered
Jan 14 13:22:04.164795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 13:22:04.175015 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 14 13:22:04.178469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 13:22:04.184991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:22:04.191114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:22:04.197452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 13:22:04.203726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:22:04.212740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 13:22:04.220279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 13:22:04.231799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 13:22:04.242877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:22:04.249771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 13:22:04.254004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:22:04.257844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:22:04.270828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 13:22:04.290684 dracut-cmdline[213]: dracut-dracut-053
Jan 14 13:22:04.295189 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:22:04.309957 systemd-resolved[207]: Positive Trust Anchors:
Jan 14 13:22:04.309968 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 13:22:04.310007 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 13:22:04.315443 systemd-resolved[207]: Defaulting to hostname 'linux'.
Jan 14 13:22:04.316474 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 13:22:04.320176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:22:04.395608 kernel: SCSI subsystem initialized
Jan 14 13:22:04.406608 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 13:22:04.417605 kernel: iscsi: registered transport (tcp)
Jan 14 13:22:04.438660 kernel: iscsi: registered transport (qla4xxx)
Jan 14 13:22:04.438753 kernel: QLogic iSCSI HBA Driver
Jan 14 13:22:04.474364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:22:04.486736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 13:22:04.518208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 13:22:04.518300 kernel: device-mapper: uevent: version 1.0.3
Jan 14 13:22:04.521752 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 13:22:04.562611 kernel: raid6: avx512x4 gen() 18432 MB/s
Jan 14 13:22:04.581596 kernel: raid6: avx512x2 gen() 18405 MB/s
Jan 14 13:22:04.600592 kernel: raid6: avx512x1 gen() 18357 MB/s
Jan 14 13:22:04.619593 kernel: raid6: avx2x4 gen() 18461 MB/s
Jan 14 13:22:04.638597 kernel: raid6: avx2x2 gen() 18461 MB/s
Jan 14 13:22:04.658344 kernel: raid6: avx2x1 gen() 13882 MB/s
Jan 14 13:22:04.658391 kernel: raid6: using algorithm avx2x4 gen() 18461 MB/s
Jan 14 13:22:04.681841 kernel: raid6: .... xor() 6974 MB/s, rmw enabled
Jan 14 13:22:04.681885 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 13:22:04.705607 kernel: xor: automatically using best checksumming function avx
Jan 14 13:22:04.851607 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 13:22:04.861562 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:22:04.869765 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:22:04.893488 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 14 13:22:04.897975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:22:04.910781 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 13:22:04.923840 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 14 13:22:04.952761 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:22:04.962761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:22:05.002668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:22:05.015264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:22:05.040663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:22:05.049923 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:22:05.061356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:22:05.064717 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:22:05.070789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:22:05.103596 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:22:05.118834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:22:05.128635 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:22:05.148632 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:22:05.148707 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:22:05.150606 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:22:05.152598 kernel: AES CTR mode by8 optimization enabled Jan 14 13:22:05.168605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:22:05.174106 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:22:05.174138 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:22:05.168986 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:22:05.230464 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:22:05.230501 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:22:05.230521 kernel: PTP clock support registered Jan 14 13:22:05.230539 kernel: scsi host1: storvsc_host_t Jan 14 13:22:05.230762 kernel: scsi host0: storvsc_host_t Jan 14 13:22:05.230913 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:22:05.231084 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:22:05.231244 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:22:05.238232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:22:05.261019 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:22:05.261052 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:22:05.261071 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:22:05.241125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:22:06.085090 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:22:06.085119 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:22:06.085140 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:22:06.085157 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:22:06.085169 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:22:05.241408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:05.248188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:05.267263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:06.076180 systemd-resolved[207]: Clock change detected. Flushing caches. 
Jan 14 13:22:06.102492 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:22:06.114381 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:22:06.114616 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:22:06.114638 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:22:06.102640 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:06.127185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:06.144210 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:22:06.166495 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:22:06.166698 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:22:06.166914 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:22:06.167080 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:22:06.167262 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:06.167288 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:22:06.152259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:06.164555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:22:06.182979 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: VF slot 1 added Jan 14 13:22:06.202817 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:22:06.208947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:22:06.210663 kernel: hv_pci 65695a65-0ca8-425e-bf59-efbeb6cdb389: PCI VMBus probing: Using version 0x10004 Jan 14 13:22:06.268546 kernel: hv_pci 65695a65-0ca8-425e-bf59-efbeb6cdb389: PCI host bridge to bus 0ca8:00 Jan 14 13:22:06.268984 kernel: pci_bus 0ca8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:22:06.269173 kernel: pci_bus 0ca8:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:22:06.269329 kernel: pci 0ca8:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:22:06.269510 kernel: pci 0ca8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:22:06.269683 kernel: pci 0ca8:00:02.0: enabling Extended Tags Jan 14 13:22:06.269886 kernel: pci 0ca8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ca8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:22:06.270048 kernel: pci_bus 0ca8:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:22:06.270199 kernel: pci 0ca8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:22:06.432495 kernel: mlx5_core 0ca8:00:02.0: enabling device (0000 -> 0002) Jan 14 13:22:06.660204 kernel: mlx5_core 0ca8:00:02.0: firmware version: 14.30.5000 Jan 14 13:22:06.660413 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: VF registering: eth1 Jan 14 13:22:06.661036 kernel: mlx5_core 0ca8:00:02.0 eth1: joined to eth0 Jan 14 13:22:06.661247 kernel: mlx5_core 0ca8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:22:06.667775 kernel: mlx5_core 0ca8:00:02.0 enP3240s1: renamed from eth1 Jan 14 13:22:06.728164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:22:06.833811 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (443) Jan 14 13:22:06.848409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 14 13:22:06.863510 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:22:06.879233 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (444) Jan 14 13:22:06.894901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:22:06.901576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:22:06.918954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:22:06.933775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:06.941775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:07.949626 disk-uuid[599]: The operation has completed successfully. Jan 14 13:22:07.952791 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:08.036558 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:22:08.036674 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:22:08.059933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:22:08.066529 sh[685]: Success Jan 14 13:22:08.103835 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:22:08.317605 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:22:08.329885 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:22:08.339299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 14 13:22:08.355796 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:22:08.355850 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:08.361201 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:22:08.367112 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:22:08.369596 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:22:08.710738 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:22:08.716460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:22:08.726956 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:22:08.732939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:22:08.746195 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:08.752024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:08.752091 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:22:08.778783 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:22:08.795428 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:08.795002 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:22:08.806552 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:22:08.819155 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:22:08.846224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:22:08.862112 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 14 13:22:08.882381 systemd-networkd[869]: lo: Link UP Jan 14 13:22:08.882392 systemd-networkd[869]: lo: Gained carrier Jan 14 13:22:08.884867 systemd-networkd[869]: Enumeration completed Jan 14 13:22:08.885376 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:22:08.888175 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:22:08.888179 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:22:08.889281 systemd[1]: Reached target network.target - Network. Jan 14 13:22:08.948771 kernel: mlx5_core 0ca8:00:02.0 enP3240s1: Link up Jan 14 13:22:08.981793 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: Data path switched to VF: enP3240s1 Jan 14 13:22:08.982089 systemd-networkd[869]: enP3240s1: Link UP Jan 14 13:22:08.982225 systemd-networkd[869]: eth0: Link UP Jan 14 13:22:08.982391 systemd-networkd[869]: eth0: Gained carrier Jan 14 13:22:08.982404 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:22:08.985470 systemd-networkd[869]: enP3240s1: Gained carrier Jan 14 13:22:09.012880 systemd-networkd[869]: eth0: DHCPv4 address 10.200.4.19/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:22:09.909519 ignition[826]: Ignition 2.20.0 Jan 14 13:22:09.909533 ignition[826]: Stage: fetch-offline Jan 14 13:22:09.909577 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:09.909586 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:09.909704 ignition[826]: parsed url from cmdline: "" Jan 14 13:22:09.909709 ignition[826]: no config URL provided Jan 14 13:22:09.909716 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:22:09.909727 ignition[826]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:22:09.924010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:22:09.909734 ignition[826]: failed to fetch config: resource requires networking Jan 14 13:22:09.910039 ignition[826]: Ignition finished successfully Jan 14 13:22:09.941411 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:22:09.955787 ignition[877]: Ignition 2.20.0 Jan 14 13:22:09.955799 ignition[877]: Stage: fetch Jan 14 13:22:09.956020 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:09.956033 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:09.956139 ignition[877]: parsed url from cmdline: "" Jan 14 13:22:09.956144 ignition[877]: no config URL provided Jan 14 13:22:09.956149 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:22:09.956155 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:22:09.956178 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:22:10.041635 ignition[877]: GET result: OK Jan 14 13:22:10.041792 ignition[877]: config has been read from IMDS userdata Jan 14 13:22:10.041831 ignition[877]: parsing config with SHA512: 0ca282ebb3318d0d98142a7d2ebf35dafb9708354e85db4119d721cab470b97c352866235fefb0f3faabd0abc34f2e5ea230af38f222c59f6b1af438258f43b7 Jan 14 13:22:10.046987 unknown[877]: fetched base config from "system" Jan 14 13:22:10.047005 unknown[877]: fetched base config from "system" Jan 14 13:22:10.047509 ignition[877]: fetch: fetch complete Jan 14 13:22:10.047013 unknown[877]: fetched user config from "azure" Jan 14 13:22:10.047514 ignition[877]: fetch: fetch passed Jan 14 13:22:10.049284 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:22:10.047560 ignition[877]: Ignition finished successfully Jan 14 13:22:10.057960 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:22:10.075284 ignition[884]: Ignition 2.20.0 Jan 14 13:22:10.081704 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:22:10.075292 ignition[884]: Stage: kargs Jan 14 13:22:10.075481 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:10.075494 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:10.076542 ignition[884]: kargs: kargs passed Jan 14 13:22:10.076592 ignition[884]: Ignition finished successfully Jan 14 13:22:10.111997 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:22:10.124886 ignition[890]: Ignition 2.20.0 Jan 14 13:22:10.124898 ignition[890]: Stage: disks Jan 14 13:22:10.125140 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:10.125153 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:10.128948 ignition[890]: disks: disks passed Jan 14 13:22:10.128999 ignition[890]: Ignition finished successfully Jan 14 13:22:10.138509 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:22:10.141468 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:22:10.146030 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:22:10.161606 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:22:10.166525 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:22:10.171950 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:22:10.180964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:22:10.258098 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:22:10.262581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:22:10.276868 systemd-networkd[869]: eth0: Gained IPv6LL Jan 14 13:22:10.277981 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:22:10.397124 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:22:10.397734 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:22:10.403542 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:22:10.459954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:22:10.462530 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:22:10.475802 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (909) Jan 14 13:22:10.487671 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:10.489280 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:10.492546 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:22:10.497780 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:22:10.499996 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:22:10.506807 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:22:10.507723 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:22:10.528295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:22:10.530930 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:22:10.544936 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 13:22:10.852902 systemd-networkd[869]: enP3240s1: Gained IPv6LL Jan 14 13:22:11.358499 coreos-metadata[926]: Jan 14 13:22:11.358 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:22:11.363213 coreos-metadata[926]: Jan 14 13:22:11.362 INFO Fetch successful Jan 14 13:22:11.363213 coreos-metadata[926]: Jan 14 13:22:11.362 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:22:11.376572 coreos-metadata[926]: Jan 14 13:22:11.374 INFO Fetch successful Jan 14 13:22:11.376572 coreos-metadata[926]: Jan 14 13:22:11.375 INFO wrote hostname ci-4152.2.0-a-0907529617 to /sysroot/etc/hostname Jan 14 13:22:11.384011 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:22:11.405707 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:22:11.428950 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:22:11.434902 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:22:11.441031 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:22:12.249794 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:22:12.257955 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:22:12.264978 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:22:12.285896 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:12.280704 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:22:12.304143 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:22:12.315258 ignition[1035]: INFO : Ignition 2.20.0 Jan 14 13:22:12.315258 ignition[1035]: INFO : Stage: mount Jan 14 13:22:12.321162 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:12.321162 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:12.334254 ignition[1035]: INFO : mount: mount passed Jan 14 13:22:12.334254 ignition[1035]: INFO : Ignition finished successfully Jan 14 13:22:12.322911 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:22:12.341132 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:22:12.354992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:22:12.377434 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044) Jan 14 13:22:12.377517 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:12.378772 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:12.383392 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:22:12.388776 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:22:12.390483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:22:12.411716 ignition[1060]: INFO : Ignition 2.20.0 Jan 14 13:22:12.411716 ignition[1060]: INFO : Stage: files Jan 14 13:22:12.418338 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:12.418338 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:12.418338 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:22:12.431379 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:22:12.431379 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:22:12.497712 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:22:12.501905 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:22:12.501905 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:22:12.498290 unknown[1060]: wrote ssh authorized keys file for user: core Jan 14 13:22:12.515765 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 14 13:22:12.521537 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 14 13:22:12.521537 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:22:12.521537 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 13:22:12.571139 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 14 13:22:13.360855 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" 
Jan 14 13:22:13.367268 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:22:13.367268 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 14 13:22:13.854823 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 14 13:22:13.918492 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:22:13.923136 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:22:13.923136 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:22:13.923136 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:22:13.957473 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:22:13.962623 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 14 13:22:14.443402 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 14 13:22:14.693347 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:14.693347 ignition[1060]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 14 13:22:14.746883 ignition[1060]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 14 13:22:14.753627 ignition[1060]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 14 13:22:14.753627 ignition[1060]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 14 13:22:14.753627 ignition[1060]: INFO : 
files: op(f): [started] processing unit "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: files passed Jan 14 13:22:14.770898 ignition[1060]: INFO : Ignition finished successfully Jan 14 13:22:14.767265 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:22:14.795020 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:22:14.819932 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:22:14.824965 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:22:14.847322 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:22:14.847322 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:22:14.825062 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 14 13:22:14.861360 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 13:22:14.843057 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:22:14.848050 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 13:22:14.875976 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 13:22:14.914093 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 13:22:14.914236 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 13:22:14.927240 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 13:22:14.934913 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 13:22:14.942813 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 13:22:14.952965 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 13:22:14.967374 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:22:14.975947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 13:22:14.987332 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 13:22:14.993799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 13:22:15.000058 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 13:22:15.002581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 13:22:15.002707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 13:22:15.008499 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 13:22:15.013354 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 13:22:15.018895 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 13:22:15.025459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 13:22:15.030356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 13:22:15.035925 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 13:22:15.048349 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 13:22:15.056070 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 13:22:15.061383 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 13:22:15.067823 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 13:22:15.075070 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 13:22:15.075242 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 13:22:15.083598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 13:22:15.093257 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 13:22:15.103271 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 13:22:15.106627 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 13:22:15.110382 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 13:22:15.110549 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 13:22:15.118851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 13:22:15.119030 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 13:22:15.124471 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 13:22:15.124623 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 13:22:15.139864 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 13:22:15.140022 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 13:22:15.153048 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 13:22:15.157674 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 13:22:15.159446 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 13:22:15.172464 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 13:22:15.187843 ignition[1113]: INFO : Ignition 2.20.0
Jan 14 13:22:15.187843 ignition[1113]: INFO : Stage: umount
Jan 14 13:22:15.187843 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 13:22:15.187843 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 13:22:15.209464 ignition[1113]: INFO : umount: umount passed
Jan 14 13:22:15.209464 ignition[1113]: INFO : Ignition finished successfully
Jan 14 13:22:15.188081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 13:22:15.191671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 13:22:15.201308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 13:22:15.201463 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 13:22:15.212392 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 13:22:15.212486 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 13:22:15.218538 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 13:22:15.218636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 13:22:15.225445 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 13:22:15.225491 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 13:22:15.231239 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 13:22:15.231291 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 13:22:15.239109 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 13:22:15.239165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 13:22:15.242316 systemd[1]: Stopped target network.target - Network.
Jan 14 13:22:15.251634 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 13:22:15.251713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 13:22:15.257164 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 13:22:15.259443 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 13:22:15.260011 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 13:22:15.273499 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 13:22:15.311293 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 13:22:15.314115 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 13:22:15.314171 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 13:22:15.318606 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 13:22:15.318657 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 13:22:15.325828 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 13:22:15.327958 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 13:22:15.337318 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 13:22:15.337400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 13:22:15.346710 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 13:22:15.350271 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 13:22:15.358954 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 13:22:15.359460 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 13:22:15.359566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 13:22:15.364244 systemd-networkd[869]: eth0: DHCPv6 lease lost
Jan 14 13:22:15.366386 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 13:22:15.366491 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 13:22:15.372167 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 13:22:15.372280 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 13:22:15.375279 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 13:22:15.375348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 13:22:15.401035 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 13:22:15.409425 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 13:22:15.409524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 13:22:15.427229 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 13:22:15.427301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 13:22:15.430013 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 13:22:15.430063 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 13:22:15.436070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 13:22:15.460372 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 13:22:15.460543 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 13:22:15.466480 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 13:22:15.466523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 13:22:15.475274 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 13:22:15.475315 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 13:22:15.480305 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 13:22:15.505079 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: Data path switched from VF: enP3240s1
Jan 14 13:22:15.480366 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 13:22:15.486403 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 13:22:15.486451 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 13:22:15.491532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 13:22:15.491586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 13:22:15.517016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 13:22:15.525110 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 13:22:15.525167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 13:22:15.531252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 13:22:15.531318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 13:22:15.542734 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 13:22:15.542925 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 13:22:15.550080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 13:22:15.550171 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 13:22:15.945005 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 13:22:15.945169 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 13:22:15.948360 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 13:22:15.959196 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 13:22:15.959303 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 13:22:15.971960 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 13:22:16.075070 systemd[1]: Switching root.
Jan 14 13:22:16.106016 systemd-journald[177]: Journal stopped
Jan 14 13:22:04.100488 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 14 13:22:04.100513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:22:04.100522 kernel: BIOS-provided physical RAM map:
Jan 14 13:22:04.100528 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 13:22:04.100534 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 13:22:04.100539 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 13:22:04.100546 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jan 14 13:22:04.100555 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 13:22:04.100561 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 13:22:04.100567 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 13:22:04.100573 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 13:22:04.100593 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 13:22:04.100603 kernel: NX (Execute Disable) protection: active
Jan 14 13:22:04.100611 kernel: APIC: Static calls initialized
Jan 14 13:22:04.100621 kernel: efi: EFI v2.7 by Microsoft
Jan 14 13:22:04.100629 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018
Jan 14 13:22:04.100636 kernel: random: crng init done
Jan 14 13:22:04.100643 kernel: secureboot: Secure boot disabled
Jan 14 13:22:04.100649 kernel: SMBIOS 3.1.0 present.
Jan 14 13:22:04.100656 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 13:22:04.100663 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 13:22:04.100670 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 13:22:04.100677 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 13:22:04.100684 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 13:22:04.100693 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 13:22:04.100699 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 13:22:04.100706 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:22:04.100713 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 13:22:04.100721 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 13:22:04.100728 kernel: tsc: Detected 2593.906 MHz processor
Jan 14 13:22:04.100735 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 13:22:04.100742 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 13:22:04.100749 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 13:22:04.100759 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 13:22:04.100766 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 13:22:04.100772 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 13:22:04.100779 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 13:22:04.100786 kernel: Using GB pages for direct mapping
Jan 14 13:22:04.100793 kernel: ACPI: Early table checksum verification disabled
Jan 14 13:22:04.100800 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 13:22:04.100810 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100820 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100827 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 13:22:04.100835 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 13:22:04.100842 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100849 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100857 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100866 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100874 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100881 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100888 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 13:22:04.100896 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 13:22:04.100903 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 13:22:04.100911 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 13:22:04.100918 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 13:22:04.100925 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 13:22:04.100935 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 13:22:04.100942 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 13:22:04.100953 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 13:22:04.100963 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 13:22:04.100970 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 13:22:04.100981 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 13:22:04.100989 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 13:22:04.100999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 13:22:04.101012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 13:22:04.101020 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 13:22:04.101030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 13:22:04.101041 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 13:22:04.101049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 13:22:04.101057 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 13:22:04.101068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 13:22:04.101075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 13:22:04.101084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 13:22:04.101095 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 13:22:04.101102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 13:22:04.101113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 13:22:04.101121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 13:22:04.101129 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 13:22:04.101139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 13:22:04.101146 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 13:22:04.101154 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 13:22:04.101161 kernel: Zone ranges:
Jan 14 13:22:04.101171 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 13:22:04.101178 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 13:22:04.101186 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:22:04.101193 kernel: Movable zone start for each node
Jan 14 13:22:04.101200 kernel: Early memory node ranges
Jan 14 13:22:04.101208 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 13:22:04.101215 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 13:22:04.101222 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 13:22:04.101230 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 13:22:04.101240 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 13:22:04.101250 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 13:22:04.101257 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 13:22:04.101265 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 13:22:04.101276 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 13:22:04.101284 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 13:22:04.101294 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 13:22:04.101304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 13:22:04.101312 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 13:22:04.101327 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 13:22:04.101335 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 13:22:04.101347 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 13:22:04.101358 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 13:22:04.101369 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 13:22:04.101382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 13:22:04.101395 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 13:22:04.101405 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 13:22:04.101415 kernel: pcpu-alloc: [0] 0 1
Jan 14 13:22:04.101429 kernel: Hyper-V: PV spinlocks enabled
Jan 14 13:22:04.101441 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 13:22:04.101454 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 14 13:22:04.101466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 13:22:04.101478 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 13:22:04.101491 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 13:22:04.101504 kernel: Fallback order for Node 0: 0
Jan 14 13:22:04.101518 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 13:22:04.101535 kernel: Policy zone: Normal
Jan 14 13:22:04.101558 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 13:22:04.101572 kernel: software IO TLB: area num 2.
Jan 14 13:22:04.101599 kernel: Memory: 8069620K/8387460K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 317584K reserved, 0K cma-reserved)
Jan 14 13:22:04.101613 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 13:22:04.101628 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 14 13:22:04.101642 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 13:22:04.101655 kernel: Dynamic Preempt: voluntary
Jan 14 13:22:04.101670 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 13:22:04.101688 kernel: rcu: RCU event tracing is enabled.
Jan 14 13:22:04.101702 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 13:22:04.101716 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 13:22:04.101724 kernel: Rude variant of Tasks RCU enabled.
Jan 14 13:22:04.101732 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 13:22:04.101740 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 13:22:04.101748 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 13:22:04.101758 kernel: Using NULL legacy PIC
Jan 14 13:22:04.101766 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 13:22:04.101774 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 13:22:04.101782 kernel: Console: colour dummy device 80x25
Jan 14 13:22:04.101790 kernel: printk: console [tty1] enabled
Jan 14 13:22:04.101797 kernel: printk: console [ttyS0] enabled
Jan 14 13:22:04.101805 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 13:22:04.101813 kernel: ACPI: Core revision 20230628
Jan 14 13:22:04.101821 kernel: Failed to register legacy timer interrupt
Jan 14 13:22:04.101829 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 13:22:04.101839 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 13:22:04.101847 kernel: Hyper-V: Using IPI hypercalls
Jan 14 13:22:04.101854 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 13:22:04.101862 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 13:22:04.101870 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 13:22:04.101878 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 13:22:04.101886 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 13:22:04.101894 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 13:22:04.101902 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jan 14 13:22:04.101912 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 13:22:04.101920 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 13:22:04.101928 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 13:22:04.101936 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 13:22:04.101944 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 13:22:04.101951 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 13:22:04.101960 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 13:22:04.101967 kernel: RETBleed: Vulnerable
Jan 14 13:22:04.101975 kernel: Speculative Store Bypass: Vulnerable
Jan 14 13:22:04.101983 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:22:04.101993 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 13:22:04.102000 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 13:22:04.102008 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 13:22:04.102016 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 13:22:04.102024 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 13:22:04.102032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 13:22:04.102039 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 13:22:04.102047 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 13:22:04.102055 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 13:22:04.102063 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 13:22:04.102071 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 13:22:04.102081 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 13:22:04.102088 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 13:22:04.102096 kernel: Freeing SMP alternatives memory: 32K
Jan 14 13:22:04.102104 kernel: pid_max: default: 32768 minimum: 301
Jan 14 13:22:04.102112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 13:22:04.102119 kernel: landlock: Up and running.
Jan 14 13:22:04.102127 kernel: SELinux: Initializing.
Jan 14 13:22:04.102135 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:22:04.102143 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 13:22:04.102151 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 13:22:04.102159 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:22:04.102169 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:22:04.102177 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 13:22:04.102185 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 13:22:04.102193 kernel: signal: max sigframe size: 3632
Jan 14 13:22:04.102201 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 13:22:04.102209 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 13:22:04.102217 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 13:22:04.102225 kernel: smp: Bringing up secondary CPUs ...
Jan 14 13:22:04.102233 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 13:22:04.102243 kernel: .... node #0, CPUs: #1
Jan 14 13:22:04.102251 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 13:22:04.102260 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 13:22:04.102267 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 13:22:04.102275 kernel: smpboot: Max logical packages: 1
Jan 14 13:22:04.102283 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 13:22:04.102291 kernel: devtmpfs: initialized
Jan 14 13:22:04.102299 kernel: x86/mm: Memory block size: 128MB
Jan 14 13:22:04.102309 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 13:22:04.102316 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 13:22:04.102324 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 13:22:04.102332 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 13:22:04.102340 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 13:22:04.102348 kernel: audit: initializing netlink subsys (disabled)
Jan 14 13:22:04.102356 kernel: audit: type=2000 audit(1736860922.027:1): state=initialized audit_enabled=0 res=1
Jan 14 13:22:04.102364 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 13:22:04.102372 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 13:22:04.102382 kernel: cpuidle: using governor menu
Jan 14 13:22:04.102390 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 13:22:04.102398 kernel: dca service started, version 1.12.1
Jan 14 13:22:04.102406 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 13:22:04.102413 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 13:22:04.102421 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:22:04.102429 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:22:04.102437 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:22:04.102445 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:22:04.102455 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:22:04.102463 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:22:04.102471 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 13:22:04.102478 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:22:04.102486 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:22:04.102494 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 13:22:04.102502 kernel: ACPI: Interpreter enabled Jan 14 13:22:04.102510 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:22:04.102518 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:22:04.102528 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:22:04.102535 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 13:22:04.102543 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 13:22:04.102551 kernel: iommu: Default domain type: Translated Jan 14 13:22:04.102559 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:22:04.102567 kernel: efivars: Registered efivars operations Jan 14 13:22:04.102575 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:22:04.102643 kernel: PCI: System does not support PCI Jan 14 13:22:04.102656 kernel: vgaarb: loaded Jan 14 13:22:04.102673 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 13:22:04.102685 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:22:04.102713 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:22:04.102724 kernel: pnp: PnP ACPI init Jan 14 13:22:04.102732 
kernel: pnp: PnP ACPI: found 3 devices Jan 14 13:22:04.102740 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:22:04.102748 kernel: NET: Registered PF_INET protocol family Jan 14 13:22:04.102756 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:22:04.102764 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 13:22:04.102775 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:22:04.102784 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 13:22:04.102792 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 13:22:04.102799 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 13:22:04.102807 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:22:04.102815 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 13:22:04.102823 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:22:04.102831 kernel: NET: Registered PF_XDP protocol family Jan 14 13:22:04.102839 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:22:04.102849 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:22:04.102857 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Jan 14 13:22:04.102865 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:22:04.102873 kernel: Initialise system trusted keyrings Jan 14 13:22:04.102881 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 13:22:04.102888 kernel: Key type asymmetric registered Jan 14 13:22:04.102896 kernel: Asymmetric key parser 'x509' registered Jan 14 13:22:04.102904 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 13:22:04.102912 kernel: io scheduler mq-deadline 
registered Jan 14 13:22:04.102922 kernel: io scheduler kyber registered Jan 14 13:22:04.102930 kernel: io scheduler bfq registered Jan 14 13:22:04.102938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:22:04.102945 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:22:04.102953 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:22:04.102962 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 13:22:04.102969 kernel: i8042: PNP: No PS/2 controller found. Jan 14 13:22:04.103092 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 13:22:04.103176 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T13:22:03 UTC (1736860923) Jan 14 13:22:04.103248 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 13:22:04.103259 kernel: intel_pstate: CPU model not supported Jan 14 13:22:04.103267 kernel: efifb: probing for efifb Jan 14 13:22:04.103275 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 13:22:04.103283 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 13:22:04.103291 kernel: efifb: scrolling: redraw Jan 14 13:22:04.103299 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 13:22:04.103307 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:22:04.103318 kernel: fb0: EFI VGA frame buffer device Jan 14 13:22:04.103326 kernel: pstore: Using crash dump compression: deflate Jan 14 13:22:04.103333 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 13:22:04.103341 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:22:04.103349 kernel: Segment Routing with IPv6 Jan 14 13:22:04.103357 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:22:04.103365 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:22:04.103373 kernel: Key type dns_resolver registered Jan 14 13:22:04.103381 kernel: IPI shorthand broadcast: enabled Jan 14 13:22:04.103391 kernel: 
sched_clock: Marking stable (929004500, 59950400)->(1264880700, -275925800) Jan 14 13:22:04.103399 kernel: registered taskstats version 1 Jan 14 13:22:04.103407 kernel: Loading compiled-in X.509 certificates Jan 14 13:22:04.103414 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 14 13:22:04.103422 kernel: Key type .fscrypt registered Jan 14 13:22:04.103430 kernel: Key type fscrypt-provisioning registered Jan 14 13:22:04.103438 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:22:04.103446 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:22:04.103456 kernel: ima: No architecture policies found Jan 14 13:22:04.103464 kernel: clk: Disabling unused clocks Jan 14 13:22:04.103472 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 14 13:22:04.103480 kernel: Write protecting the kernel read-only data: 36864k Jan 14 13:22:04.103488 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 14 13:22:04.103496 kernel: Run /init as init process Jan 14 13:22:04.103504 kernel: with arguments: Jan 14 13:22:04.103512 kernel: /init Jan 14 13:22:04.103519 kernel: with environment: Jan 14 13:22:04.103527 kernel: HOME=/ Jan 14 13:22:04.103537 kernel: TERM=linux Jan 14 13:22:04.103545 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 13:22:04.103555 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:22:04.103564 systemd[1]: Detected virtualization microsoft. Jan 14 13:22:04.103573 systemd[1]: Detected architecture x86-64. Jan 14 13:22:04.103595 systemd[1]: Running in initrd. Jan 14 13:22:04.103608 systemd[1]: No hostname configured, using default hostname. 
Jan 14 13:22:04.103623 systemd[1]: Hostname set to . Jan 14 13:22:04.103637 systemd[1]: Initializing machine ID from random generator. Jan 14 13:22:04.103649 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:22:04.103663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:22:04.103677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:22:04.103693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:22:04.103709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:22:04.103725 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:22:04.103746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:22:04.103766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 13:22:04.103779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 13:22:04.103794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:22:04.103809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:22:04.103823 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:22:04.103846 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:22:04.103862 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:22:04.103876 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:22:04.103889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:22:04.103904 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 14 13:22:04.103917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:22:04.103931 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:22:04.103945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:22:04.103959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:22:04.103974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:22:04.103991 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:22:04.104007 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:22:04.104023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:22:04.104037 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:22:04.104051 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:22:04.104066 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:22:04.104082 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:22:04.104122 systemd-journald[177]: Collecting audit messages is disabled. Jan 14 13:22:04.104155 systemd-journald[177]: Journal started Jan 14 13:22:04.104184 systemd-journald[177]: Runtime Journal (/run/log/journal/3e18c7ebbbd24626b3172a575931da91) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:22:04.109600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:04.119593 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:22:04.122901 systemd-modules-load[178]: Inserted module 'overlay' Jan 14 13:22:04.129366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:22:04.135215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:22:04.147465 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 13:22:04.156220 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:22:04.166599 kernel: Bridge firewalling registered Jan 14 13:22:04.164795 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:22:04.175015 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 14 13:22:04.178469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:22:04.184991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:22:04.191114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:04.197452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:22:04.203726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:22:04.212740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:22:04.220279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:22:04.231799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:22:04.242877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:22:04.249771 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:22:04.254004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:22:04.257844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:22:04.270828 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 14 13:22:04.290684 dracut-cmdline[213]: dracut-dracut-053 Jan 14 13:22:04.295189 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 14 13:22:04.309957 systemd-resolved[207]: Positive Trust Anchors: Jan 14 13:22:04.309968 systemd-resolved[207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:22:04.310007 systemd-resolved[207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:22:04.315443 systemd-resolved[207]: Defaulting to hostname 'linux'. Jan 14 13:22:04.316474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:22:04.320176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:22:04.395608 kernel: SCSI subsystem initialized Jan 14 13:22:04.406608 kernel: Loading iSCSI transport class v2.0-870. 
Jan 14 13:22:04.417605 kernel: iscsi: registered transport (tcp) Jan 14 13:22:04.438660 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:22:04.438753 kernel: QLogic iSCSI HBA Driver Jan 14 13:22:04.474364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:22:04.486736 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:22:04.518208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 13:22:04.518300 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:22:04.521752 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 13:22:04.562611 kernel: raid6: avx512x4 gen() 18432 MB/s Jan 14 13:22:04.581596 kernel: raid6: avx512x2 gen() 18405 MB/s Jan 14 13:22:04.600592 kernel: raid6: avx512x1 gen() 18357 MB/s Jan 14 13:22:04.619593 kernel: raid6: avx2x4 gen() 18461 MB/s Jan 14 13:22:04.638597 kernel: raid6: avx2x2 gen() 18461 MB/s Jan 14 13:22:04.658344 kernel: raid6: avx2x1 gen() 13882 MB/s Jan 14 13:22:04.658391 kernel: raid6: using algorithm avx2x4 gen() 18461 MB/s Jan 14 13:22:04.681841 kernel: raid6: .... xor() 6974 MB/s, rmw enabled Jan 14 13:22:04.681885 kernel: raid6: using avx512x2 recovery algorithm Jan 14 13:22:04.705607 kernel: xor: automatically using best checksumming function avx Jan 14 13:22:04.851607 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:22:04.861562 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:22:04.869765 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:22:04.893488 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 14 13:22:04.897975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:22:04.910781 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 14 13:22:04.923840 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 14 13:22:04.952761 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:22:04.962761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:22:05.002668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:22:05.015264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:22:05.040663 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:22:05.049923 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:22:05.061356 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:22:05.064717 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:22:05.070789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:22:05.103596 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:22:05.118834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:22:05.128635 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 13:22:05.148632 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 13:22:05.148707 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 13:22:05.150606 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 13:22:05.152598 kernel: AES CTR mode by8 optimization enabled Jan 14 13:22:05.168605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:22:05.174106 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 13:22:05.174138 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 13:22:05.168986 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:22:05.230464 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 13:22:05.230501 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 13:22:05.230521 kernel: PTP clock support registered Jan 14 13:22:05.230539 kernel: scsi host1: storvsc_host_t Jan 14 13:22:05.230762 kernel: scsi host0: storvsc_host_t Jan 14 13:22:05.230913 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 13:22:05.231084 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 13:22:05.231244 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:22:05.238232 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:22:05.261019 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 13:22:05.261052 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 13:22:05.261071 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 13:22:05.241125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:22:06.085090 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 13:22:06.085119 kernel: hv_vmbus: registering driver hv_utils Jan 14 13:22:06.085140 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 13:22:06.085157 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 13:22:06.085169 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 13:22:05.241408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:05.248188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:05.267263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:06.076180 systemd-resolved[207]: Clock change detected. Flushing caches. 
Jan 14 13:22:06.102492 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:22:06.114381 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 13:22:06.114616 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 13:22:06.114638 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 13:22:06.102640 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:06.127185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:06.144210 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 13:22:06.166495 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 13:22:06.166698 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 13:22:06.166914 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 13:22:06.167080 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 13:22:06.167262 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:06.167288 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 13:22:06.152259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:06.164555 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:22:06.182979 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: VF slot 1 added Jan 14 13:22:06.202817 kernel: hv_vmbus: registering driver hv_pci Jan 14 13:22:06.208947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 14 13:22:06.210663 kernel: hv_pci 65695a65-0ca8-425e-bf59-efbeb6cdb389: PCI VMBus probing: Using version 0x10004 Jan 14 13:22:06.268546 kernel: hv_pci 65695a65-0ca8-425e-bf59-efbeb6cdb389: PCI host bridge to bus 0ca8:00 Jan 14 13:22:06.268984 kernel: pci_bus 0ca8:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 13:22:06.269173 kernel: pci_bus 0ca8:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 13:22:06.269329 kernel: pci 0ca8:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 13:22:06.269510 kernel: pci 0ca8:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:22:06.269683 kernel: pci 0ca8:00:02.0: enabling Extended Tags Jan 14 13:22:06.269886 kernel: pci 0ca8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0ca8:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 13:22:06.270048 kernel: pci_bus 0ca8:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 13:22:06.270199 kernel: pci 0ca8:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 13:22:06.432495 kernel: mlx5_core 0ca8:00:02.0: enabling device (0000 -> 0002) Jan 14 13:22:06.660204 kernel: mlx5_core 0ca8:00:02.0: firmware version: 14.30.5000 Jan 14 13:22:06.660413 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: VF registering: eth1 Jan 14 13:22:06.661036 kernel: mlx5_core 0ca8:00:02.0 eth1: joined to eth0 Jan 14 13:22:06.661247 kernel: mlx5_core 0ca8:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 13:22:06.667775 kernel: mlx5_core 0ca8:00:02.0 enP3240s1: renamed from eth1 Jan 14 13:22:06.728164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 13:22:06.833811 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (443) Jan 14 13:22:06.848409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 14 13:22:06.863510 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 13:22:06.879233 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (444) Jan 14 13:22:06.894901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 13:22:06.901576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 13:22:06.918954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:22:06.933775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:06.941775 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:07.949626 disk-uuid[599]: The operation has completed successfully. Jan 14 13:22:07.952791 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 13:22:08.036558 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:22:08.036674 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:22:08.059933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 13:22:08.066529 sh[685]: Success Jan 14 13:22:08.103835 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 13:22:08.317605 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:22:08.329885 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 13:22:08.339299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 14 13:22:08.355796 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 14 13:22:08.355850 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:08.361201 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 13:22:08.367112 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:22:08.369596 kernel: BTRFS info (device dm-0): using free space tree Jan 14 13:22:08.710738 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 13:22:08.716460 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:22:08.726956 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:22:08.732939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:22:08.746195 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:08.752024 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:08.752091 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:22:08.778783 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:22:08.795428 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:08.795002 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 13:22:08.806552 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:22:08.819155 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:22:08.846224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:22:08.862112 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 14 13:22:08.882381 systemd-networkd[869]: lo: Link UP Jan 14 13:22:08.882392 systemd-networkd[869]: lo: Gained carrier Jan 14 13:22:08.884867 systemd-networkd[869]: Enumeration completed Jan 14 13:22:08.885376 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:22:08.888175 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:22:08.888179 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:22:08.889281 systemd[1]: Reached target network.target - Network. Jan 14 13:22:08.948771 kernel: mlx5_core 0ca8:00:02.0 enP3240s1: Link up Jan 14 13:22:08.981793 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: Data path switched to VF: enP3240s1 Jan 14 13:22:08.982089 systemd-networkd[869]: enP3240s1: Link UP Jan 14 13:22:08.982225 systemd-networkd[869]: eth0: Link UP Jan 14 13:22:08.982391 systemd-networkd[869]: eth0: Gained carrier Jan 14 13:22:08.982404 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 14 13:22:08.985470 systemd-networkd[869]: enP3240s1: Gained carrier Jan 14 13:22:09.012880 systemd-networkd[869]: eth0: DHCPv4 address 10.200.4.19/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:22:09.909519 ignition[826]: Ignition 2.20.0 Jan 14 13:22:09.909533 ignition[826]: Stage: fetch-offline Jan 14 13:22:09.909577 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:09.909586 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:09.909704 ignition[826]: parsed url from cmdline: "" Jan 14 13:22:09.909709 ignition[826]: no config URL provided Jan 14 13:22:09.909716 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:22:09.909727 ignition[826]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:22:09.924010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:22:09.909734 ignition[826]: failed to fetch config: resource requires networking Jan 14 13:22:09.910039 ignition[826]: Ignition finished successfully Jan 14 13:22:09.941411 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 13:22:09.955787 ignition[877]: Ignition 2.20.0 Jan 14 13:22:09.955799 ignition[877]: Stage: fetch Jan 14 13:22:09.956020 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:09.956033 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:09.956139 ignition[877]: parsed url from cmdline: "" Jan 14 13:22:09.956144 ignition[877]: no config URL provided Jan 14 13:22:09.956149 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:22:09.956155 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:22:09.956178 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 13:22:10.041635 ignition[877]: GET result: OK Jan 14 13:22:10.041792 ignition[877]: config has been read from IMDS userdata Jan 14 13:22:10.041831 ignition[877]: parsing config with SHA512: 0ca282ebb3318d0d98142a7d2ebf35dafb9708354e85db4119d721cab470b97c352866235fefb0f3faabd0abc34f2e5ea230af38f222c59f6b1af438258f43b7 Jan 14 13:22:10.046987 unknown[877]: fetched base config from "system" Jan 14 13:22:10.047005 unknown[877]: fetched base config from "system" Jan 14 13:22:10.047509 ignition[877]: fetch: fetch complete Jan 14 13:22:10.047013 unknown[877]: fetched user config from "azure" Jan 14 13:22:10.047514 ignition[877]: fetch: fetch passed Jan 14 13:22:10.049284 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:22:10.047560 ignition[877]: Ignition finished successfully Jan 14 13:22:10.057960 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:22:10.075284 ignition[884]: Ignition 2.20.0 Jan 14 13:22:10.081704 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 14 13:22:10.075292 ignition[884]: Stage: kargs Jan 14 13:22:10.075481 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:10.075494 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:10.076542 ignition[884]: kargs: kargs passed Jan 14 13:22:10.076592 ignition[884]: Ignition finished successfully Jan 14 13:22:10.111997 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:22:10.124886 ignition[890]: Ignition 2.20.0 Jan 14 13:22:10.124898 ignition[890]: Stage: disks Jan 14 13:22:10.125140 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:10.125153 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:10.128948 ignition[890]: disks: disks passed Jan 14 13:22:10.128999 ignition[890]: Ignition finished successfully Jan 14 13:22:10.138509 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:22:10.141468 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:22:10.146030 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:22:10.161606 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:22:10.166525 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:22:10.171950 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:22:10.180964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:22:10.258098 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 13:22:10.262581 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:22:10.276868 systemd-networkd[869]: eth0: Gained IPv6LL Jan 14 13:22:10.277981 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 14 13:22:10.397124 kernel: EXT4-fs (sda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none. Jan 14 13:22:10.397734 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:22:10.403542 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:22:10.459954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:22:10.462530 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:22:10.475802 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (909) Jan 14 13:22:10.487671 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:10.489280 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:10.492546 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:22:10.497780 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:22:10.499996 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 13:22:10.506807 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:22:10.507723 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:22:10.528295 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:22:10.530930 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 13:22:10.544936 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 13:22:10.852902 systemd-networkd[869]: enP3240s1: Gained IPv6LL Jan 14 13:22:11.358499 coreos-metadata[926]: Jan 14 13:22:11.358 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 13:22:11.363213 coreos-metadata[926]: Jan 14 13:22:11.362 INFO Fetch successful Jan 14 13:22:11.363213 coreos-metadata[926]: Jan 14 13:22:11.362 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 13:22:11.376572 coreos-metadata[926]: Jan 14 13:22:11.374 INFO Fetch successful Jan 14 13:22:11.376572 coreos-metadata[926]: Jan 14 13:22:11.375 INFO wrote hostname ci-4152.2.0-a-0907529617 to /sysroot/etc/hostname Jan 14 13:22:11.384011 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:22:11.405707 initrd-setup-root[940]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 13:22:11.428950 initrd-setup-root[947]: cut: /sysroot/etc/group: No such file or directory Jan 14 13:22:11.434902 initrd-setup-root[954]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 13:22:11.441031 initrd-setup-root[961]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 13:22:12.249794 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:22:12.257955 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:22:12.264978 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:22:12.285896 kernel: BTRFS info (device sda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:12.280704 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:22:12.304143 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 14 13:22:12.315258 ignition[1035]: INFO : Ignition 2.20.0 Jan 14 13:22:12.315258 ignition[1035]: INFO : Stage: mount Jan 14 13:22:12.321162 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:12.321162 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:12.334254 ignition[1035]: INFO : mount: mount passed Jan 14 13:22:12.334254 ignition[1035]: INFO : Ignition finished successfully Jan 14 13:22:12.322911 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:22:12.341132 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:22:12.354992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:22:12.377434 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1044) Jan 14 13:22:12.377517 kernel: BTRFS info (device sda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 14 13:22:12.378772 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:22:12.383392 kernel: BTRFS info (device sda6): using free space tree Jan 14 13:22:12.388776 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 13:22:12.390483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:22:12.411716 ignition[1060]: INFO : Ignition 2.20.0 Jan 14 13:22:12.411716 ignition[1060]: INFO : Stage: files Jan 14 13:22:12.418338 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:12.418338 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:12.418338 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:22:12.431379 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:22:12.431379 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:22:12.497712 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:22:12.501905 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:22:12.501905 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:22:12.498290 unknown[1060]: wrote ssh authorized keys file for user: core Jan 14 13:22:12.515765 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 14 13:22:12.521537 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 14 13:22:12.521537 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 13:22:12.521537 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 13:22:12.571139 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 14 13:22:13.360855 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" 
Jan 14 13:22:13.367268 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:22:13.367268 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 14 13:22:13.854823 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 14 13:22:13.918492 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 14 13:22:13.923136 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:22:13.923136 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:22:13.923136 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:22:13.936479 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:22:13.957473 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:22:13.962623 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:13.967377 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 14 13:22:14.443402 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 14 13:22:14.693347 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 14 13:22:14.693347 ignition[1060]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 14 13:22:14.746883 ignition[1060]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 14 13:22:14.753627 ignition[1060]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 14 13:22:14.753627 ignition[1060]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 14 13:22:14.753627 ignition[1060]: INFO : 
files: op(f): [started] processing unit "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:22:14.770898 ignition[1060]: INFO : files: files passed Jan 14 13:22:14.770898 ignition[1060]: INFO : Ignition finished successfully Jan 14 13:22:14.767265 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:22:14.795020 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:22:14.819932 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:22:14.824965 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:22:14.847322 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:22:14.847322 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:22:14.825062 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 14 13:22:14.861360 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:22:14.843057 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:22:14.848050 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:22:14.875976 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:22:14.914093 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:22:14.914236 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:22:14.927240 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:22:14.934913 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:22:14.942813 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:22:14.952965 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:22:14.967374 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:22:14.975947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:22:14.987332 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:22:14.993799 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:22:15.000058 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:22:15.002581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:22:15.002707 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:22:15.008499 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:22:15.013354 systemd[1]: Stopped target basic.target - Basic System. 
Jan 14 13:22:15.018895 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:22:15.025459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:22:15.030356 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:22:15.035925 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:22:15.048349 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:22:15.056070 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:22:15.061383 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:22:15.067823 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:22:15.075070 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:22:15.075242 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:22:15.083598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:22:15.093257 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:22:15.103271 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:22:15.106627 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:22:15.110382 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:22:15.110549 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:22:15.118851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:22:15.119030 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:22:15.124471 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:22:15.124623 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:22:15.139864 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 14 13:22:15.140022 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 13:22:15.153048 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:22:15.157674 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:22:15.159446 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:22:15.172464 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:22:15.187843 ignition[1113]: INFO : Ignition 2.20.0 Jan 14 13:22:15.187843 ignition[1113]: INFO : Stage: umount Jan 14 13:22:15.187843 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:22:15.187843 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 13:22:15.209464 ignition[1113]: INFO : umount: umount passed Jan 14 13:22:15.209464 ignition[1113]: INFO : Ignition finished successfully Jan 14 13:22:15.188081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:22:15.191671 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:22:15.201308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:22:15.201463 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:22:15.212392 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:22:15.212486 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:22:15.218538 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:22:15.218636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:22:15.225445 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:22:15.225491 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:22:15.231239 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 14 13:22:15.231291 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:22:15.239109 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:22:15.239165 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:22:15.242316 systemd[1]: Stopped target network.target - Network. Jan 14 13:22:15.251634 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:22:15.251713 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:22:15.257164 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:22:15.259443 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:22:15.260011 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:22:15.273499 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:22:15.311293 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:22:15.314115 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:22:15.314171 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:22:15.318606 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:22:15.318657 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:22:15.325828 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:22:15.327958 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:22:15.337318 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:22:15.337400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:22:15.346710 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:22:15.350271 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:22:15.358954 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 14 13:22:15.359460 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:22:15.359566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:22:15.364244 systemd-networkd[869]: eth0: DHCPv6 lease lost Jan 14 13:22:15.366386 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:22:15.366491 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:22:15.372167 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:22:15.372280 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:22:15.375279 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:22:15.375348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:22:15.401035 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:22:15.409425 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:22:15.409524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:22:15.427229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:22:15.427301 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:22:15.430013 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:22:15.430063 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 13:22:15.436070 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:22:15.460372 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:22:15.460543 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:22:15.466480 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:22:15.466523 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 14 13:22:15.475274 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:22:15.475315 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:22:15.480305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:22:15.505079 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: Data path switched from VF: enP3240s1 Jan 14 13:22:15.480366 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:22:15.486403 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:22:15.486451 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:22:15.491532 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:22:15.491586 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:22:15.517016 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:22:15.525110 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:22:15.525167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:22:15.531252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:22:15.531318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:15.542734 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:22:15.542925 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:22:15.550080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:22:15.550171 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:22:15.945005 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:22:15.945169 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:22:15.948360 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 14 13:22:15.959196 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:22:15.959303 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:22:15.971960 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:22:16.075070 systemd[1]: Switching root. Jan 14 13:22:16.106016 systemd-journald[177]: Journal stopped Jan 14 13:22:22.199652 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jan 14 13:22:22.199687 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 13:22:22.199705 kernel: SELinux: policy capability open_perms=1 Jan 14 13:22:22.199717 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 13:22:22.199725 kernel: SELinux: policy capability always_check_network=0 Jan 14 13:22:22.199736 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 13:22:22.199745 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 13:22:22.199880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 13:22:22.199892 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 13:22:22.199902 kernel: audit: type=1403 audit(1736860938.402:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 13:22:22.199915 systemd[1]: Successfully loaded SELinux policy in 149.473ms. Jan 14 13:22:22.199927 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.294ms. Jan 14 13:22:22.199940 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 13:22:22.199952 systemd[1]: Detected virtualization microsoft. Jan 14 13:22:22.199968 systemd[1]: Detected architecture x86-64. Jan 14 13:22:22.199981 systemd[1]: Detected first boot. 
Jan 14 13:22:22.199993 systemd[1]: Hostname set to <ci-4152.2.0-a-0907529617>. Jan 14 13:22:22.200004 systemd[1]: Initializing machine ID from random generator. Jan 14 13:22:22.200018 zram_generator::config[1175]: No configuration found. Jan 14 13:22:22.200033 systemd[1]: Populated /etc with preset unit settings. Jan 14 13:22:22.200044 systemd[1]: Queued start job for default target multi-user.target. Jan 14 13:22:22.200056 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 13:22:22.200071 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 13:22:22.200086 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 13:22:22.200097 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 13:22:22.200109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 13:22:22.200125 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 13:22:22.200135 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 13:22:22.200146 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 13:22:22.200158 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 13:22:22.200170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:22:22.200181 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:22:22.200194 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 13:22:22.200209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 13:22:22.200221 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 14 13:22:22.200234 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:22:22.200247 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 13:22:22.200261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:22:22.200272 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 13:22:22.200284 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:22:22.200301 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:22:22.200314 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:22:22.200330 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:22:22.200342 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 13:22:22.200355 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 13:22:22.200369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:22:22.200382 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 13:22:22.200393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:22:22.200406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:22:22.200421 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:22:22.200436 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 13:22:22.200449 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 13:22:22.200462 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 13:22:22.200474 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 13:22:22.200490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 14 13:22:22.200504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 13:22:22.200517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 13:22:22.200528 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 13:22:22.200542 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 13:22:22.200553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:22:22.200566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:22:22.200579 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 13:22:22.200593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:22:22.200606 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:22:22.200616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:22:22.200631 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 13:22:22.200645 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:22:22.200659 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 13:22:22.200673 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 14 13:22:22.200687 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 14 13:22:22.200703 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:22:22.200713 kernel: fuse: init (API version 7.39) Jan 14 13:22:22.200726 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 14 13:22:22.200740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:22:22.200759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 13:22:22.200773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:22:22.200786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:22:22.200798 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 13:22:22.200814 kernel: loop: module loaded Jan 14 13:22:22.200827 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 13:22:22.200841 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 13:22:22.200853 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 13:22:22.200865 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 13:22:22.200879 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 13:22:22.200913 systemd-journald[1282]: Collecting audit messages is disabled. Jan 14 13:22:22.200941 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 13:22:22.200956 systemd-journald[1282]: Journal started Jan 14 13:22:22.200982 systemd-journald[1282]: Runtime Journal (/run/log/journal/faf346b160b14af9a0eec4ac8bdb8535) is 8.0M, max 158.8M, 150.8M free. Jan 14 13:22:22.211778 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:22:22.217093 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:22:22.230525 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 13:22:22.230863 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 13:22:22.234461 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 14 13:22:22.234715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:22:22.238506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:22:22.239188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:22:22.245226 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 13:22:22.245482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 13:22:22.248938 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:22:22.249184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:22:22.255987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:22:22.259796 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:22:22.265368 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 13:22:22.287027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:22:22.298597 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:22:22.321774 kernel: ACPI: bus type drm_connector registered Jan 14 13:22:22.322984 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 13:22:22.336901 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 13:22:22.340065 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 13:22:22.367952 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 13:22:22.372299 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 14 13:22:22.375795 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:22:22.378206 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 13:22:22.382804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:22:22.386977 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:22:22.401301 systemd-journald[1282]: Time spent on flushing to /var/log/journal/faf346b160b14af9a0eec4ac8bdb8535 is 44.357ms for 946 entries. Jan 14 13:22:22.401301 systemd-journald[1282]: System Journal (/var/log/journal/faf346b160b14af9a0eec4ac8bdb8535) is 8.0M, max 2.6G, 2.6G free. Jan 14 13:22:22.470263 systemd-journald[1282]: Received client request to flush runtime journal. Jan 14 13:22:22.406133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:22:22.411954 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 14 13:22:22.425409 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:22:22.425938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:22:22.435427 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 13:22:22.440473 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 13:22:22.455594 udevadm[1336]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 14 13:22:22.460029 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 13:22:22.464056 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 14 13:22:22.473225 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 13:22:22.528389 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jan 14 13:22:22.528417 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jan 14 13:22:22.535468 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:22:22.551116 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 13:22:22.556632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:22:22.705746 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 13:22:22.719041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:22:22.740243 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 14 13:22:22.740269 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 14 13:22:22.745655 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:22:23.850661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 13:22:23.858027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:22:23.890075 systemd-udevd[1362]: Using default interface naming scheme 'v255'. Jan 14 13:22:24.240527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:22:24.253935 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:22:24.322589 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 14 13:22:24.337900 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 13:22:24.422849 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 13:22:24.439607 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 14 13:22:24.468782 kernel: hv_vmbus: registering driver hv_balloon Jan 14 13:22:24.473780 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 13:22:24.503818 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 13:22:24.523205 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 13:22:24.523284 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 13:22:24.528860 kernel: Console: switching to colour dummy device 80x25 Jan 14 13:22:24.531772 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 13:22:24.566027 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:24.576270 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:22:24.576593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:24.605257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:22:24.741197 systemd-networkd[1366]: lo: Link UP Jan 14 13:22:24.741789 systemd-networkd[1366]: lo: Gained carrier Jan 14 13:22:24.748391 systemd-networkd[1366]: Enumeration completed Jan 14 13:22:24.748552 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:22:24.753207 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:22:24.753297 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:22:24.763663 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 14 13:22:24.780959 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1374) Jan 14 13:22:24.828778 kernel: mlx5_core 0ca8:00:02.0 enP3240s1: Link up Jan 14 13:22:24.852348 kernel: hv_netvsc 000d3ad6-5cb3-000d-3ad6-5cb3000d3ad6 eth0: Data path switched to VF: enP3240s1 Jan 14 13:22:24.852803 systemd-networkd[1366]: enP3240s1: Link UP Jan 14 13:22:24.852938 systemd-networkd[1366]: eth0: Link UP Jan 14 13:22:24.852943 systemd-networkd[1366]: eth0: Gained carrier Jan 14 13:22:24.852963 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 13:22:24.901013 systemd-networkd[1366]: enP3240s1: Gained carrier Jan 14 13:22:24.927116 systemd-networkd[1366]: eth0: DHCPv4 address 10.200.4.19/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:22:24.962217 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 13:22:25.007777 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 14 13:22:25.019043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:22:25.075470 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 14 13:22:25.089142 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 14 13:22:25.237556 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 14 13:22:25.268232 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 14 13:22:25.272004 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:22:25.282957 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 14 13:22:25.287884 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 14 13:22:25.321323 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 14 13:22:25.327393 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:22:25.331026 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 13:22:25.331068 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:22:25.333649 systemd[1]: Reached target machines.target - Containers. Jan 14 13:22:25.337221 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 14 13:22:25.345939 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:22:25.350340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 13:22:25.352932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:22:25.355918 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 13:22:25.366920 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 14 13:22:25.374929 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 13:22:25.380504 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:22:25.452524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 13:22:25.454328 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 14 13:22:25.481554 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 14 13:22:25.488872 kernel: loop0: detected capacity change from 0 to 211296 Jan 14 13:22:25.537785 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 13:22:25.576784 kernel: loop1: detected capacity change from 0 to 140992 Jan 14 13:22:26.038782 kernel: loop2: detected capacity change from 0 to 28272 Jan 14 13:22:26.445792 kernel: loop3: detected capacity change from 0 to 138184 Jan 14 13:22:26.597033 systemd-networkd[1366]: eth0: Gained IPv6LL Jan 14 13:22:26.604132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:22:26.660919 systemd-networkd[1366]: enP3240s1: Gained IPv6LL Jan 14 13:22:26.844782 kernel: loop4: detected capacity change from 0 to 211296 Jan 14 13:22:26.852792 kernel: loop5: detected capacity change from 0 to 140992 Jan 14 13:22:26.865733 kernel: loop6: detected capacity change from 0 to 28272 Jan 14 13:22:26.872779 kernel: loop7: detected capacity change from 0 to 138184 Jan 14 13:22:26.881952 (sd-merge)[1505]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 14 13:22:26.882535 (sd-merge)[1505]: Merged extensions into '/usr'. Jan 14 13:22:26.886620 systemd[1]: Reloading requested from client PID 1490 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 13:22:26.886637 systemd[1]: Reloading... Jan 14 13:22:26.949074 zram_generator::config[1535]: No configuration found. Jan 14 13:22:27.102950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:22:27.178493 systemd[1]: Reloading finished in 291 ms. Jan 14 13:22:27.195006 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 13:22:27.208946 systemd[1]: Starting ensure-sysext.service... 
Jan 14 13:22:27.219949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:22:27.229875 systemd[1]: Reloading requested from client PID 1596 ('systemctl') (unit ensure-sysext.service)... Jan 14 13:22:27.230065 systemd[1]: Reloading... Jan 14 13:22:27.259356 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 13:22:27.260943 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 14 13:22:27.262365 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 14 13:22:27.262877 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Jan 14 13:22:27.263067 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Jan 14 13:22:27.306795 zram_generator::config[1625]: No configuration found. Jan 14 13:22:27.325158 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:22:27.325178 systemd-tmpfiles[1597]: Skipping /boot Jan 14 13:22:27.336466 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:22:27.336481 systemd-tmpfiles[1597]: Skipping /boot Jan 14 13:22:27.458180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:22:27.540519 systemd[1]: Reloading finished in 309 ms. Jan 14 13:22:27.562059 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:22:27.578920 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:22:27.617934 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 14 13:22:27.624858 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 13:22:27.637940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:22:27.643949 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 13:22:27.661333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:22:27.661629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:22:27.666263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:22:27.679113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:22:27.693034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:22:27.703635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:22:27.703864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:22:27.705154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:22:27.705395 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:22:27.717977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:22:27.719236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:22:27.729412 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:22:27.731999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:22:27.749235 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 14 13:22:27.749548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:22:27.755378 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 13:22:27.763626 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 13:22:27.770968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:22:27.771821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:22:27.779041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:22:27.788399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:22:27.795055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:22:27.798101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:22:27.798286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:22:27.801786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:22:27.802019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:22:27.808034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:22:27.808265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:22:27.812694 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:22:27.813186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:22:27.824730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 14 13:22:27.825040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:22:27.831305 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:22:27.836104 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:22:27.842070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:22:27.856116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:22:27.862060 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:22:27.863664 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 13:22:27.867263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:22:27.870422 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:22:27.870637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:22:27.875356 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:22:27.875591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:22:27.878997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:22:27.879176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:22:27.882912 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:22:27.883139 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:22:27.893164 systemd[1]: Finished ensure-sysext.service. Jan 14 13:22:27.899372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 14 13:22:27.899433 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:22:27.902059 augenrules[1756]: No rules Jan 14 13:22:27.903386 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:22:27.903721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:22:27.918708 systemd-resolved[1696]: Positive Trust Anchors: Jan 14 13:22:27.918725 systemd-resolved[1696]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:22:27.918786 systemd-resolved[1696]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:22:27.953453 systemd-resolved[1696]: Using system hostname 'ci-4152.2.0-a-0907529617'. Jan 14 13:22:27.970945 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:22:27.974196 systemd[1]: Reached target network.target - Network. Jan 14 13:22:27.976406 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:22:27.979805 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:22:28.388726 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 13:22:28.394053 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 14 13:22:31.606591 ldconfig[1486]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 13:22:31.616438 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 13:22:31.628069 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 13:22:31.793678 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 13:22:31.797465 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:22:31.800714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 13:22:31.805937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 13:22:31.809399 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:22:31.812734 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 13:22:31.816002 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 13:22:31.819184 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:22:31.819224 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:22:31.821520 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:22:31.824830 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 13:22:31.829402 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:22:31.851501 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:22:31.854875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:22:31.857958 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:22:31.860358 systemd[1]: Reached target basic.target - Basic System. 
Jan 14 13:22:31.862797 systemd[1]: System is tainted: cgroupsv1 Jan 14 13:22:31.862873 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:22:31.862914 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:22:31.896890 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 13:22:31.902914 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:22:31.911938 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 13:22:31.920717 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:22:31.938862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:22:31.946931 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 13:22:31.954303 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 13:22:31.954367 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 14 13:22:31.963734 (chronyd)[1772]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 14 13:22:31.982127 jq[1780]: false Jan 14 13:22:31.966946 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 13:22:31.971305 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 14 13:22:31.975873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 13:22:31.984555 KVP[1782]: KVP starting; pid is:1782 Jan 14 13:22:31.986394 chronyd[1787]: chronyd version 4.6 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 14 13:22:31.989108 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 13:22:32.000774 KVP[1782]: KVP LIC Version: 3.1 Jan 14 13:22:32.004522 kernel: hv_utils: KVP IC version 4.0 Jan 14 13:22:32.003972 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 13:22:32.014944 chronyd[1787]: Timezone right/UTC failed leap second check, ignoring Jan 14 13:22:32.015245 chronyd[1787]: Loaded seccomp filter (level 2) Jan 14 13:22:32.017124 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 13:22:32.031333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:22:32.045980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 13:22:32.050774 extend-filesystems[1781]: Found loop4 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found loop5 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found loop6 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found loop7 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda1 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda2 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda3 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found usr Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda4 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda6 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda7 Jan 14 13:22:32.050774 extend-filesystems[1781]: Found sda9 Jan 14 13:22:32.050774 extend-filesystems[1781]: Checking size of /dev/sda9 Jan 14 13:22:32.075054 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 14 13:22:32.084716 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 13:22:32.098736 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 13:22:32.112907 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 13:22:32.119220 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 13:22:32.124901 jq[1814]: true
Jan 14 13:22:32.129161 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 13:22:32.129478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 13:22:32.140208 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 13:22:32.141045 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 13:22:32.156768 extend-filesystems[1781]: Old size kept for /dev/sda9
Jan 14 13:22:32.190022 extend-filesystems[1781]: Found sr0
Jan 14 13:22:32.174665 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 13:22:32.202709 update_engine[1812]: I20250114 13:22:32.192950 1812 main.cc:92] Flatcar Update Engine starting
Jan 14 13:22:32.177161 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 13:22:32.183680 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 13:22:32.212510 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 13:22:32.212880 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 13:22:32.229501 (ntainerd)[1829]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 13:22:32.253310 dbus-daemon[1775]: [system] SELinux support is enabled
Jan 14 13:22:32.253847 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 13:22:32.265523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 13:22:32.265570 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 13:22:32.278895 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 13:22:32.286996 jq[1828]: true
Jan 14 13:22:32.279316 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 13:22:32.302799 update_engine[1812]: I20250114 13:22:32.298377 1812 update_check_scheduler.cc:74] Next update check in 4m47s
Jan 14 13:22:32.298998 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 13:22:32.305635 tar[1827]: linux-amd64/helm
Jan 14 13:22:32.315661 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 13:22:32.320985 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 13:22:32.407293 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1856)
Jan 14 13:22:32.412529 coreos-metadata[1774]: Jan 14 13:22:32.412 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 13:22:32.417480 coreos-metadata[1774]: Jan 14 13:22:32.417 INFO Fetch successful
Jan 14 13:22:32.417480 coreos-metadata[1774]: Jan 14 13:22:32.417 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 13:22:32.425279 systemd-logind[1807]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 13:22:32.429991 coreos-metadata[1774]: Jan 14 13:22:32.426 INFO Fetch successful
Jan 14 13:22:32.429991 coreos-metadata[1774]: Jan 14 13:22:32.429 INFO Fetching http://168.63.129.16/machine/a79d750a-1885-41ea-9f95-0a7f819fd909/631d08e6%2Dfaa8%2D4533%2D87b5%2De19f625dc420.%5Fci%2D4152.2.0%2Da%2D0907529617?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 13:22:32.430921 systemd-logind[1807]: New seat seat0.
Jan 14 13:22:32.434200 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 13:22:32.440981 coreos-metadata[1774]: Jan 14 13:22:32.440 INFO Fetch successful
Jan 14 13:22:32.444196 coreos-metadata[1774]: Jan 14 13:22:32.441 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 13:22:32.459796 coreos-metadata[1774]: Jan 14 13:22:32.459 INFO Fetch successful
Jan 14 13:22:32.510361 bash[1874]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 13:22:32.516118 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 13:22:32.577558 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 13:22:32.593664 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 13:22:32.598145 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 13:22:32.686817 sshd_keygen[1819]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 13:22:32.731862 locksmithd[1851]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 13:22:32.736739 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 13:22:32.751141 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 13:22:32.760154 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 13:22:32.777434 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 13:22:32.780402 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 13:22:32.799145 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 13:22:32.866190 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 13:22:32.873121 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 13:22:32.888149 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 13:22:32.900489 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 13:22:32.905051 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 13:22:33.218025 tar[1827]: linux-amd64/LICENSE
Jan 14 13:22:33.218025 tar[1827]: linux-amd64/README.md
Jan 14 13:22:33.236842 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 13:22:33.717614 containerd[1829]: time="2025-01-14T13:22:33.717521300Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 14 13:22:33.753326 containerd[1829]: time="2025-01-14T13:22:33.753074800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.755006 containerd[1829]: time="2025-01-14T13:22:33.754958400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:22:33.755006 containerd[1829]: time="2025-01-14T13:22:33.754992100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 13:22:33.755145 containerd[1829]: time="2025-01-14T13:22:33.755014900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 13:22:33.755239 containerd[1829]: time="2025-01-14T13:22:33.755191500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 13:22:33.755239 containerd[1829]: time="2025-01-14T13:22:33.755219600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.755239 containerd[1829]: time="2025-01-14T13:22:33.755301600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:22:33.755239 containerd[1829]: time="2025-01-14T13:22:33.755319500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.758119 containerd[1829]: time="2025-01-14T13:22:33.758081000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:22:33.758119 containerd[1829]: time="2025-01-14T13:22:33.758109100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.758237 containerd[1829]: time="2025-01-14T13:22:33.758128700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:22:33.758237 containerd[1829]: time="2025-01-14T13:22:33.758142500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.758311 containerd[1829]: time="2025-01-14T13:22:33.758253800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.759559 containerd[1829]: time="2025-01-14T13:22:33.758494900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 13:22:33.759559 containerd[1829]: time="2025-01-14T13:22:33.758706800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 13:22:33.759559 containerd[1829]: time="2025-01-14T13:22:33.758727300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 13:22:33.761069 containerd[1829]: time="2025-01-14T13:22:33.760874200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 13:22:33.761069 containerd[1829]: time="2025-01-14T13:22:33.760947200Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 13:22:33.779594 containerd[1829]: time="2025-01-14T13:22:33.779539900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 13:22:33.779773 containerd[1829]: time="2025-01-14T13:22:33.779688600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 13:22:33.779773 containerd[1829]: time="2025-01-14T13:22:33.779715300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 13:22:33.779773 containerd[1829]: time="2025-01-14T13:22:33.779734200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 13:22:33.779879 containerd[1829]: time="2025-01-14T13:22:33.779775500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 13:22:33.780029 containerd[1829]: time="2025-01-14T13:22:33.779989500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 13:22:33.780937 containerd[1829]: time="2025-01-14T13:22:33.780642200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 13:22:33.780937 containerd[1829]: time="2025-01-14T13:22:33.780824100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 13:22:33.780937 containerd[1829]: time="2025-01-14T13:22:33.780866900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 13:22:33.780937 containerd[1829]: time="2025-01-14T13:22:33.780888100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 13:22:33.780937 containerd[1829]: time="2025-01-14T13:22:33.780908400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.780956000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.780974200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.780992700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.781025500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.781044900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.781063200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.781092800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.781121700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781173 containerd[1829]: time="2025-01-14T13:22:33.781146400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781183200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781205500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781223000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781273100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781295100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781313800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781345600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781367500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781383500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781412100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781429600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781483 containerd[1829]: time="2025-01-14T13:22:33.781450100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781490200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781509200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781525700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781601000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781627000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781712900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781731300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781744200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781779400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781796100Z" level=info msg="NRI interface is disabled by configuration."
Jan 14 13:22:33.781898 containerd[1829]: time="2025-01-14T13:22:33.781810200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 14 13:22:33.783018 containerd[1829]: time="2025-01-14T13:22:33.782272800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 14 13:22:33.783018 containerd[1829]: time="2025-01-14T13:22:33.782365500Z" level=info msg="Connect containerd service"
Jan 14 13:22:33.783018 containerd[1829]: time="2025-01-14T13:22:33.782429800Z" level=info msg="using legacy CRI server"
Jan 14 13:22:33.783018 containerd[1829]: time="2025-01-14T13:22:33.782441400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 13:22:33.783018 containerd[1829]: time="2025-01-14T13:22:33.782642600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 14 13:22:33.783662 containerd[1829]: time="2025-01-14T13:22:33.783577900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 13:22:33.783762 containerd[1829]: time="2025-01-14T13:22:33.783716600Z" level=info msg="Start subscribing containerd event"
Jan 14 13:22:33.783816 containerd[1829]: time="2025-01-14T13:22:33.783796400Z" level=info msg="Start recovering state"
Jan 14 13:22:33.783894 containerd[1829]: time="2025-01-14T13:22:33.783875200Z" level=info msg="Start event monitor"
Jan 14 13:22:33.783933 containerd[1829]: time="2025-01-14T13:22:33.783892800Z" level=info msg="Start snapshots syncer"
Jan 14 13:22:33.783933 containerd[1829]: time="2025-01-14T13:22:33.783905400Z" level=info msg="Start cni network conf syncer for default"
Jan 14 13:22:33.783933 containerd[1829]: time="2025-01-14T13:22:33.783915600Z" level=info msg="Start streaming server"
Jan 14 13:22:33.784476 containerd[1829]: time="2025-01-14T13:22:33.784445700Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 13:22:33.785028 containerd[1829]: time="2025-01-14T13:22:33.784569400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 13:22:33.789665 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 13:22:33.790841 containerd[1829]: time="2025-01-14T13:22:33.790820100Z" level=info msg="containerd successfully booted in 0.074428s"
Jan 14 13:22:33.830944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 13:22:33.836719 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 13:22:33.841444 systemd[1]: Startup finished in 853ms (firmware) + 34.327s (loader) + 14.815s (kernel) + 15.587s (userspace) = 1min 5.584s.
Jan 14 13:22:33.846303 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 13:22:34.259602 login[1969]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:22:34.261974 login[1970]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 13:22:34.274487 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 13:22:34.283080 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 13:22:34.287372 systemd-logind[1807]: New session 1 of user core.
Jan 14 13:22:34.295282 systemd-logind[1807]: New session 2 of user core.
Jan 14 13:22:34.308799 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 13:22:34.320130 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 13:22:34.333191 (systemd)[2005]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 13:22:34.554809 systemd[2005]: Queued start job for default target default.target.
Jan 14 13:22:34.555310 systemd[2005]: Created slice app.slice - User Application Slice.
Jan 14 13:22:34.555339 systemd[2005]: Reached target paths.target - Paths.
Jan 14 13:22:34.555355 systemd[2005]: Reached target timers.target - Timers.
Jan 14 13:22:34.561853 systemd[2005]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 13:22:34.577977 systemd[2005]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 13:22:34.578086 systemd[2005]: Reached target sockets.target - Sockets.
Jan 14 13:22:34.578112 systemd[2005]: Reached target basic.target - Basic System.
Jan 14 13:22:34.578165 systemd[2005]: Reached target default.target - Main User Target.
Jan 14 13:22:34.578201 systemd[2005]: Startup finished in 236ms.
Jan 14 13:22:34.578347 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 13:22:34.586102 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 13:22:34.587154 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 13:22:34.774784 kubelet[1992]: E0114 13:22:34.774102 1992 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 13:22:34.777717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 13:22:34.778104 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 13:22:35.058549 waagent[1966]: 2025-01-14T13:22:35.058428Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 13:22:35.060800 waagent[1966]: 2025-01-14T13:22:35.060703Z INFO Daemon Daemon OS: flatcar 4152.2.0
Jan 14 13:22:35.061375 waagent[1966]: 2025-01-14T13:22:35.060945Z INFO Daemon Daemon Python: 3.11.10
Jan 14 13:22:35.061748 waagent[1966]: 2025-01-14T13:22:35.061688Z INFO Daemon Daemon Run daemon
Jan 14 13:22:35.062012 waagent[1966]: 2025-01-14T13:22:35.061969Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4152.2.0'
Jan 14 13:22:35.062138 waagent[1966]: 2025-01-14T13:22:35.062099Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 13:22:35.062399 waagent[1966]: 2025-01-14T13:22:35.062357Z INFO Daemon Daemon Activate resource disk
Jan 14 13:22:35.062522 waagent[1966]: 2025-01-14T13:22:35.062484Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 13:22:35.086244 waagent[1966]: 2025-01-14T13:22:35.086143Z INFO Daemon Daemon Found device: None
Jan 14 13:22:35.088693 waagent[1966]: 2025-01-14T13:22:35.088616Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.089776Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.090654Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.091666Z INFO Daemon Daemon Running default provisioning handler
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.100317Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.101953Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.102330Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 13:22:35.116632 waagent[1966]: 2025-01-14T13:22:35.103249Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 13:22:35.226782 waagent[1966]: 2025-01-14T13:22:35.224000Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 13:22:35.238551 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 13:22:35.245791 waagent[1966]: 2025-01-14T13:22:35.240051Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 13:22:35.245791 waagent[1966]: 2025-01-14T13:22:35.241257Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 13:22:35.245791 waagent[1966]: 2025-01-14T13:22:35.242249Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 13:22:35.245791 waagent[1966]: 2025-01-14T13:22:35.242669Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 13:22:35.245791 waagent[1966]: 2025-01-14T13:22:35.243336Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 13:22:35.245791 waagent[1966]: 2025-01-14T13:22:35.243772Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 13:22:35.284251 waagent[1966]: 2025-01-14T13:22:35.284181Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 13:22:35.315664 waagent[1966]: 2025-01-14T13:22:35.309299Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 13:22:35.315664 waagent[1966]: 2025-01-14T13:22:35.310235Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 13:22:35.429607 waagent[1966]: 2025-01-14T13:22:35.429502Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 13:22:35.432921 waagent[1966]: 2025-01-14T13:22:35.432845Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 13:22:35.439040 waagent[1966]: 2025-01-14T13:22:35.438985Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 13:22:35.480207 waagent[1966]: 2025-01-14T13:22:35.480129Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162
Jan 14 13:22:35.484783 waagent[1966]: 2025-01-14T13:22:35.481981Z INFO Daemon
Jan 14 13:22:35.484783 waagent[1966]: 2025-01-14T13:22:35.483890Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2cb924c6-df25-44ab-bc6a-f97963be083b eTag: 11229353986554757358 source: Fabric]
Jan 14 13:22:35.485235 waagent[1966]: 2025-01-14T13:22:35.485167Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 14 13:22:35.485235 waagent[1966]: 2025-01-14T13:22:35.486967Z INFO Daemon
Jan 14 13:22:35.485235 waagent[1966]: 2025-01-14T13:22:35.487693Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 13:22:35.501284 waagent[1966]: 2025-01-14T13:22:35.493963Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 13:22:35.561938 waagent[1966]: 2025-01-14T13:22:35.561855Z INFO Daemon Downloaded certificate {'thumbprint': 'E55BE88D18EAE6809E0C41E2349931DAE95E7954', 'hasPrivateKey': True}
Jan 14 13:22:35.568619 waagent[1966]: 2025-01-14T13:22:35.563114Z INFO Daemon Fetch goal state completed
Jan 14 13:22:35.571304 waagent[1966]: 2025-01-14T13:22:35.571255Z INFO Daemon Daemon Starting provisioning
Jan 14 13:22:35.575975 waagent[1966]: 2025-01-14T13:22:35.572532Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 13:22:35.575975 waagent[1966]: 2025-01-14T13:22:35.573117Z INFO Daemon Daemon Set hostname [ci-4152.2.0-a-0907529617]
Jan 14 13:22:35.592818 waagent[1966]: 2025-01-14T13:22:35.592711Z INFO Daemon Daemon Publish hostname [ci-4152.2.0-a-0907529617]
Jan 14 13:22:35.600721 waagent[1966]: 2025-01-14T13:22:35.594272Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 13:22:35.600721 waagent[1966]: 2025-01-14T13:22:35.594820Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 13:22:35.622064 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 13:22:35.622075 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 13:22:35.622129 systemd-networkd[1366]: eth0: DHCP lease lost Jan 14 13:22:35.623767 waagent[1966]: 2025-01-14T13:22:35.623654Z INFO Daemon Daemon Create user account if not exists Jan 14 13:22:35.634779 waagent[1966]: 2025-01-14T13:22:35.624983Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 13:22:35.634779 waagent[1966]: 2025-01-14T13:22:35.625784Z INFO Daemon Daemon Configure sudoer Jan 14 13:22:35.634779 waagent[1966]: 2025-01-14T13:22:35.626911Z INFO Daemon Daemon Configure sshd Jan 14 13:22:35.634779 waagent[1966]: 2025-01-14T13:22:35.627689Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 13:22:35.634779 waagent[1966]: 2025-01-14T13:22:35.627941Z INFO Daemon Daemon Deploy ssh public key. Jan 14 13:22:35.643197 systemd-networkd[1366]: eth0: DHCPv6 lease lost Jan 14 13:22:35.678856 systemd-networkd[1366]: eth0: DHCPv4 address 10.200.4.19/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 13:22:36.735799 waagent[1966]: 2025-01-14T13:22:36.735702Z INFO Daemon Daemon Provisioning complete Jan 14 13:22:36.748073 waagent[1966]: 2025-01-14T13:22:36.748013Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 13:22:36.755196 waagent[1966]: 2025-01-14T13:22:36.749380Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 14 13:22:36.755196 waagent[1966]: 2025-01-14T13:22:36.749810Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 13:22:36.879584 waagent[2061]: 2025-01-14T13:22:36.879468Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 13:22:36.880039 waagent[2061]: 2025-01-14T13:22:36.879647Z INFO ExtHandler ExtHandler OS: flatcar 4152.2.0 Jan 14 13:22:36.880039 waagent[2061]: 2025-01-14T13:22:36.879734Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 14 13:22:36.941913 waagent[2061]: 2025-01-14T13:22:36.941805Z INFO ExtHandler ExtHandler Distro: flatcar-4152.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 13:22:36.942196 waagent[2061]: 2025-01-14T13:22:36.942139Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:22:36.942314 waagent[2061]: 2025-01-14T13:22:36.942261Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:22:36.950281 waagent[2061]: 2025-01-14T13:22:36.950203Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 13:22:36.956789 waagent[2061]: 2025-01-14T13:22:36.956726Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162 Jan 14 13:22:36.957286 waagent[2061]: 2025-01-14T13:22:36.957236Z INFO ExtHandler Jan 14 13:22:36.957395 waagent[2061]: 2025-01-14T13:22:36.957329Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a7be3bc8-523b-463f-b97e-04949870a6b7 eTag: 11229353986554757358 source: Fabric] Jan 14 13:22:36.957727 waagent[2061]: 2025-01-14T13:22:36.957681Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 13:22:36.958313 waagent[2061]: 2025-01-14T13:22:36.958256Z INFO ExtHandler Jan 14 13:22:36.958383 waagent[2061]: 2025-01-14T13:22:36.958342Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 13:22:36.962468 waagent[2061]: 2025-01-14T13:22:36.962425Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 13:22:37.023151 waagent[2061]: 2025-01-14T13:22:37.023061Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E55BE88D18EAE6809E0C41E2349931DAE95E7954', 'hasPrivateKey': True} Jan 14 13:22:37.023680 waagent[2061]: 2025-01-14T13:22:37.023622Z INFO ExtHandler Fetch goal state completed Jan 14 13:22:37.038579 waagent[2061]: 2025-01-14T13:22:37.038502Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2061 Jan 14 13:22:37.038739 waagent[2061]: 2025-01-14T13:22:37.038689Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 13:22:37.040324 waagent[2061]: 2025-01-14T13:22:37.040266Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4152.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 13:22:37.040684 waagent[2061]: 2025-01-14T13:22:37.040634Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 13:22:37.066053 waagent[2061]: 2025-01-14T13:22:37.065993Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 13:22:37.066305 waagent[2061]: 2025-01-14T13:22:37.066250Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 13:22:37.073534 waagent[2061]: 2025-01-14T13:22:37.073493Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 13:22:37.080584 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit waagent.service)... Jan 14 13:22:37.080601 systemd[1]: Reloading... 
Jan 14 13:22:37.177812 zram_generator::config[2114]: No configuration found. Jan 14 13:22:37.294549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:22:37.373957 systemd[1]: Reloading finished in 292 ms. Jan 14 13:22:37.398106 waagent[2061]: 2025-01-14T13:22:37.397978Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 13:22:37.405587 systemd[1]: Reloading requested from client PID 2170 ('systemctl') (unit waagent.service)... Jan 14 13:22:37.405602 systemd[1]: Reloading... Jan 14 13:22:37.485803 zram_generator::config[2204]: No configuration found. Jan 14 13:22:37.615703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:22:37.694572 systemd[1]: Reloading finished in 288 ms. Jan 14 13:22:37.720787 waagent[2061]: 2025-01-14T13:22:37.720387Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 13:22:37.720787 waagent[2061]: 2025-01-14T13:22:37.720576Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 13:22:38.066347 waagent[2061]: 2025-01-14T13:22:38.066237Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 13:22:38.067177 waagent[2061]: 2025-01-14T13:22:38.067104Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 13:22:38.068076 waagent[2061]: 2025-01-14T13:22:38.068014Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jan 14 13:22:38.068547 waagent[2061]: 2025-01-14T13:22:38.068475Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 13:22:38.068698 waagent[2061]: 2025-01-14T13:22:38.068644Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:22:38.068882 waagent[2061]: 2025-01-14T13:22:38.068803Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 13:22:38.069285 waagent[2061]: 2025-01-14T13:22:38.069153Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:22:38.069285 waagent[2061]: 2025-01-14T13:22:38.069226Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 13:22:38.069780 waagent[2061]: 2025-01-14T13:22:38.069629Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 13:22:38.069975 waagent[2061]: 2025-01-14T13:22:38.069921Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 13:22:38.070341 waagent[2061]: 2025-01-14T13:22:38.070278Z INFO EnvHandler ExtHandler Configure routes Jan 14 13:22:38.070507 waagent[2061]: 2025-01-14T13:22:38.070433Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 14 13:22:38.070806 waagent[2061]: 2025-01-14T13:22:38.070737Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 13:22:38.070806 waagent[2061]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 13:22:38.070806 waagent[2061]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 13:22:38.070806 waagent[2061]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 13:22:38.070806 waagent[2061]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:22:38.070806 waagent[2061]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:22:38.070806 waagent[2061]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 13:22:38.071068 waagent[2061]: 2025-01-14T13:22:38.071003Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 13:22:38.071174 waagent[2061]: 2025-01-14T13:22:38.071128Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 13:22:38.071408 waagent[2061]: 2025-01-14T13:22:38.071240Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 13:22:38.071706 waagent[2061]: 2025-01-14T13:22:38.071643Z INFO EnvHandler ExtHandler Gateway:None Jan 14 13:22:38.073424 waagent[2061]: 2025-01-14T13:22:38.073236Z INFO EnvHandler ExtHandler Routes:None Jan 14 13:22:38.080215 waagent[2061]: 2025-01-14T13:22:38.080176Z INFO ExtHandler ExtHandler Jan 14 13:22:38.080379 waagent[2061]: 2025-01-14T13:22:38.080348Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2040254f-a49c-4712-8e86-a2255175de0b correlation f9478bc4-d95a-4efa-aebe-7daa8c5c97ac created: 2025-01-14T13:21:16.267159Z] Jan 14 13:22:38.080798 waagent[2061]: 2025-01-14T13:22:38.080736Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 13:22:38.081777 waagent[2061]: 2025-01-14T13:22:38.081312Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 13:22:38.119454 waagent[2061]: 2025-01-14T13:22:38.119385Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B20D24C2-7B61-40FB-80E9-3FCF989F82AB;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 13:22:38.122974 waagent[2061]: 2025-01-14T13:22:38.122905Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 13:22:38.122974 waagent[2061]: Executing ['ip', '-a', '-o', 'link']: Jan 14 13:22:38.122974 waagent[2061]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 13:22:38.122974 waagent[2061]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d6:5c:b3 brd ff:ff:ff:ff:ff:ff Jan 14 13:22:38.122974 waagent[2061]: 3: enP3240s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d6:5c:b3 brd ff:ff:ff:ff:ff:ff\ altname enP3240p0s2 Jan 14 13:22:38.122974 waagent[2061]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 13:22:38.122974 waagent[2061]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 13:22:38.122974 waagent[2061]: 2: eth0 inet 10.200.4.19/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 13:22:38.122974 waagent[2061]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 13:22:38.122974 waagent[2061]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 13:22:38.122974 waagent[2061]: 2: eth0 inet6 fe80::20d:3aff:fed6:5cb3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:22:38.122974 waagent[2061]: 3: enP3240s1 inet6 fe80::20d:3aff:fed6:5cb3/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Jan 14 13:22:38.191428 waagent[2061]: 2025-01-14T13:22:38.191342Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 13:22:38.191428 waagent[2061]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:22:38.191428 waagent[2061]: pkts bytes target prot opt in out source destination Jan 14 13:22:38.191428 waagent[2061]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:22:38.191428 waagent[2061]: pkts bytes target prot opt in out source destination Jan 14 13:22:38.191428 waagent[2061]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:22:38.191428 waagent[2061]: pkts bytes target prot opt in out source destination Jan 14 13:22:38.191428 waagent[2061]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:22:38.191428 waagent[2061]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:22:38.191428 waagent[2061]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 13:22:38.195443 waagent[2061]: 2025-01-14T13:22:38.195377Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 13:22:38.195443 waagent[2061]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:22:38.195443 waagent[2061]: pkts bytes target prot opt in out source destination Jan 14 13:22:38.195443 waagent[2061]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:22:38.195443 waagent[2061]: pkts bytes target prot opt in out source destination Jan 14 13:22:38.195443 waagent[2061]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 13:22:38.195443 waagent[2061]: pkts bytes target prot opt in out source destination Jan 14 13:22:38.195443 waagent[2061]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 13:22:38.195443 waagent[2061]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 13:22:38.195443 waagent[2061]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 
14 13:22:38.195935 waagent[2061]: 2025-01-14T13:22:38.195703Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 13:22:44.988614 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:22:44.994000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:45.138874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:22:45.142693 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:45.677697 kubelet[2309]: E0114 13:22:45.677612 2309 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:45.682525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:45.682938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:22:55.738730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:22:55.752018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:22:55.805471 chronyd[1787]: Selected source PHC0 Jan 14 13:22:55.853945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 13:22:55.859572 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:22:56.447608 kubelet[2330]: E0114 13:22:56.447501 2330 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:22:56.450600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:22:56.450937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:23:06.488742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:23:06.494026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:06.594952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:06.605181 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:23:06.650712 kubelet[2353]: E0114 13:23:06.650649 2353 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:23:06.653853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:23:06.654177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:23:12.592775 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 13:23:16.738597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 14 13:23:16.743977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:16.856937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:16.857207 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:23:17.361297 kubelet[2374]: E0114 13:23:17.361187 2374 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:23:17.364178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:23:17.364506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:23:17.481198 update_engine[1812]: I20250114 13:23:17.481089 1812 update_attempter.cc:509] Updating boot flags... Jan 14 13:23:17.548778 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2398) Jan 14 13:23:24.064747 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:23:24.070078 systemd[1]: Started sshd@0-10.200.4.19:22-10.200.16.10:38040.service - OpenSSH per-connection server daemon (10.200.16.10:38040). Jan 14 13:23:24.931100 sshd[2446]: Accepted publickey for core from 10.200.16.10 port 38040 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:24.932502 sshd-session[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:24.937067 systemd-logind[1807]: New session 3 of user core. Jan 14 13:23:24.946107 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 14 13:23:25.493115 systemd[1]: Started sshd@1-10.200.4.19:22-10.200.16.10:38044.service - OpenSSH per-connection server daemon (10.200.16.10:38044). Jan 14 13:23:26.100526 sshd[2451]: Accepted publickey for core from 10.200.16.10 port 38044 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:26.102067 sshd-session[2451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:26.108122 systemd-logind[1807]: New session 4 of user core. Jan 14 13:23:26.118104 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:23:26.528391 sshd[2454]: Connection closed by 10.200.16.10 port 38044 Jan 14 13:23:26.529382 sshd-session[2451]: pam_unix(sshd:session): session closed for user core Jan 14 13:23:26.532711 systemd[1]: sshd@1-10.200.4.19:22-10.200.16.10:38044.service: Deactivated successfully. Jan 14 13:23:26.539077 systemd-logind[1807]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:23:26.539694 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:23:26.540581 systemd-logind[1807]: Removed session 4. Jan 14 13:23:26.640106 systemd[1]: Started sshd@2-10.200.4.19:22-10.200.16.10:41822.service - OpenSSH per-connection server daemon (10.200.16.10:41822). Jan 14 13:23:27.245217 sshd[2459]: Accepted publickey for core from 10.200.16.10 port 41822 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:27.246927 sshd-session[2459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:27.252250 systemd-logind[1807]: New session 5 of user core. Jan 14 13:23:27.261994 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:23:27.488684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 13:23:27.496219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 13:23:27.670823 sshd[2462]: Connection closed by 10.200.16.10 port 41822 Jan 14 13:23:27.671021 sshd-session[2459]: pam_unix(sshd:session): session closed for user core Jan 14 13:23:27.681632 systemd[1]: sshd@2-10.200.4.19:22-10.200.16.10:41822.service: Deactivated successfully. Jan 14 13:23:27.684559 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 13:23:27.701093 systemd-logind[1807]: Session 5 logged out. Waiting for processes to exit. Jan 14 13:23:27.703382 systemd-logind[1807]: Removed session 5. Jan 14 13:23:27.713944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:27.718233 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:23:27.765062 kubelet[2479]: E0114 13:23:27.764496 2479 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:23:27.767500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:23:27.767856 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:23:27.778036 systemd[1]: Started sshd@3-10.200.4.19:22-10.200.16.10:41830.service - OpenSSH per-connection server daemon (10.200.16.10:41830). Jan 14 13:23:28.384380 sshd[2489]: Accepted publickey for core from 10.200.16.10 port 41830 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:28.386073 sshd-session[2489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:28.391624 systemd-logind[1807]: New session 6 of user core. Jan 14 13:23:28.400369 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 14 13:23:28.824258 sshd[2492]: Connection closed by 10.200.16.10 port 41830 Jan 14 13:23:28.825171 sshd-session[2489]: pam_unix(sshd:session): session closed for user core Jan 14 13:23:28.830568 systemd[1]: sshd@3-10.200.4.19:22-10.200.16.10:41830.service: Deactivated successfully. Jan 14 13:23:28.834459 systemd-logind[1807]: Session 6 logged out. Waiting for processes to exit. Jan 14 13:23:28.835111 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 13:23:28.836075 systemd-logind[1807]: Removed session 6. Jan 14 13:23:28.928460 systemd[1]: Started sshd@4-10.200.4.19:22-10.200.16.10:41838.service - OpenSSH per-connection server daemon (10.200.16.10:41838). Jan 14 13:23:29.533146 sshd[2497]: Accepted publickey for core from 10.200.16.10 port 41838 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:29.534841 sshd-session[2497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:29.540223 systemd-logind[1807]: New session 7 of user core. Jan 14 13:23:29.546076 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:23:30.048255 sudo[2501]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:23:30.048647 sudo[2501]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:23:30.091552 sudo[2501]: pam_unix(sudo:session): session closed for user root Jan 14 13:23:30.187717 sshd[2500]: Connection closed by 10.200.16.10 port 41838 Jan 14 13:23:30.188987 sshd-session[2497]: pam_unix(sshd:session): session closed for user core Jan 14 13:23:30.194672 systemd[1]: sshd@4-10.200.4.19:22-10.200.16.10:41838.service: Deactivated successfully. Jan 14 13:23:30.198262 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:23:30.199262 systemd-logind[1807]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:23:30.200361 systemd-logind[1807]: Removed session 7. 
Jan 14 13:23:30.293285 systemd[1]: Started sshd@5-10.200.4.19:22-10.200.16.10:41850.service - OpenSSH per-connection server daemon (10.200.16.10:41850). Jan 14 13:23:30.903476 sshd[2506]: Accepted publickey for core from 10.200.16.10 port 41850 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:30.908530 sshd-session[2506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:30.926287 systemd-logind[1807]: New session 8 of user core. Jan 14 13:23:30.945677 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 13:23:31.235307 sudo[2511]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:23:31.235670 sudo[2511]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:23:31.239265 sudo[2511]: pam_unix(sudo:session): session closed for user root Jan 14 13:23:31.244586 sudo[2510]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:23:31.244956 sudo[2510]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:23:31.258132 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:23:31.284937 augenrules[2533]: No rules Jan 14 13:23:31.286716 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:23:31.287210 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:23:31.290167 sudo[2510]: pam_unix(sudo:session): session closed for user root Jan 14 13:23:31.390233 sshd[2509]: Connection closed by 10.200.16.10 port 41850 Jan 14 13:23:31.391096 sshd-session[2506]: pam_unix(sshd:session): session closed for user core Jan 14 13:23:31.396930 systemd[1]: sshd@5-10.200.4.19:22-10.200.16.10:41850.service: Deactivated successfully. Jan 14 13:23:31.400110 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 14 13:23:31.400876 systemd-logind[1807]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:23:31.401835 systemd-logind[1807]: Removed session 8. Jan 14 13:23:31.494380 systemd[1]: Started sshd@6-10.200.4.19:22-10.200.16.10:41856.service - OpenSSH per-connection server daemon (10.200.16.10:41856). Jan 14 13:23:32.096493 sshd[2542]: Accepted publickey for core from 10.200.16.10 port 41856 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:23:32.097946 sshd-session[2542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:23:32.102870 systemd-logind[1807]: New session 9 of user core. Jan 14 13:23:32.112475 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:23:32.428069 sudo[2546]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:23:32.428433 sudo[2546]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:23:34.205096 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 13:23:34.206444 (dockerd)[2565]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:23:35.812876 dockerd[2565]: time="2025-01-14T13:23:35.812814494Z" level=info msg="Starting up" Jan 14 13:23:36.361569 dockerd[2565]: time="2025-01-14T13:23:36.361515914Z" level=info msg="Loading containers: start." Jan 14 13:23:36.613788 kernel: Initializing XFRM netlink socket Jan 14 13:23:36.730549 systemd-networkd[1366]: docker0: Link UP Jan 14 13:23:36.769114 dockerd[2565]: time="2025-01-14T13:23:36.769066849Z" level=info msg="Loading containers: done." 
Jan 14 13:23:36.828555 dockerd[2565]: time="2025-01-14T13:23:36.828503164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:23:36.829053 dockerd[2565]: time="2025-01-14T13:23:36.828633565Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 14 13:23:36.829053 dockerd[2565]: time="2025-01-14T13:23:36.828783366Z" level=info msg="Daemon has completed initialization" Jan 14 13:23:36.879428 dockerd[2565]: time="2025-01-14T13:23:36.878351896Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:23:36.878989 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:23:37.988894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 13:23:37.998358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:38.691982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:38.698217 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:23:38.758179 kubelet[2765]: E0114 13:23:38.758098 2765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:23:38.761011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:23:38.761329 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 13:23:38.993194 containerd[1829]: time="2025-01-14T13:23:38.993151236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 14 13:23:39.770195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229471432.mount: Deactivated successfully. Jan 14 13:23:41.788102 containerd[1829]: time="2025-01-14T13:23:41.788037574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:41.806160 containerd[1829]: time="2025-01-14T13:23:41.805876828Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 14 13:23:41.814395 containerd[1829]: time="2025-01-14T13:23:41.814312501Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:41.821142 containerd[1829]: time="2025-01-14T13:23:41.821053860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:41.822406 containerd[1829]: time="2025-01-14T13:23:41.822196470Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.829003434s" Jan 14 13:23:41.822406 containerd[1829]: time="2025-01-14T13:23:41.822242970Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 14 13:23:41.846317 containerd[1829]: 
time="2025-01-14T13:23:41.846272879Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 14 13:23:43.834895 containerd[1829]: time="2025-01-14T13:23:43.834833824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:43.837223 containerd[1829]: time="2025-01-14T13:23:43.837169044Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Jan 14 13:23:43.839772 containerd[1829]: time="2025-01-14T13:23:43.839698366Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:43.844962 containerd[1829]: time="2025-01-14T13:23:43.844917711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:43.846086 containerd[1829]: time="2025-01-14T13:23:43.845924120Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 1.999613641s" Jan 14 13:23:43.846086 containerd[1829]: time="2025-01-14T13:23:43.845964720Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 14 13:23:43.870474 containerd[1829]: time="2025-01-14T13:23:43.870428732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 14 
13:23:45.275275 containerd[1829]: time="2025-01-14T13:23:45.275212873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:45.277256 containerd[1829]: time="2025-01-14T13:23:45.277198188Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Jan 14 13:23:45.280503 containerd[1829]: time="2025-01-14T13:23:45.280445912Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:45.285201 containerd[1829]: time="2025-01-14T13:23:45.285166247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:45.286648 containerd[1829]: time="2025-01-14T13:23:45.286146755Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.415677721s" Jan 14 13:23:45.286648 containerd[1829]: time="2025-01-14T13:23:45.286184855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 14 13:23:45.309520 containerd[1829]: time="2025-01-14T13:23:45.309481828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 14 13:23:46.583064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273644706.mount: Deactivated successfully. 
Jan 14 13:23:47.025389 containerd[1829]: time="2025-01-14T13:23:47.025332564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:47.027079 containerd[1829]: time="2025-01-14T13:23:47.027021877Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Jan 14 13:23:47.030107 containerd[1829]: time="2025-01-14T13:23:47.030050699Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:47.035119 containerd[1829]: time="2025-01-14T13:23:47.035060836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:47.036031 containerd[1829]: time="2025-01-14T13:23:47.035621540Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.726098012s" Jan 14 13:23:47.036031 containerd[1829]: time="2025-01-14T13:23:47.035661341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 14 13:23:47.059328 containerd[1829]: time="2025-01-14T13:23:47.059284916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 14 13:23:47.754646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678384629.mount: Deactivated successfully. 
Jan 14 13:23:48.988514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 13:23:48.990390 containerd[1829]: time="2025-01-14T13:23:48.989921946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:48.992739 containerd[1829]: time="2025-01-14T13:23:48.992678667Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 14 13:23:48.996473 containerd[1829]: time="2025-01-14T13:23:48.996260594Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:48.998033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:49.004187 containerd[1829]: time="2025-01-14T13:23:49.002840042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:49.005982 containerd[1829]: time="2025-01-14T13:23:49.005944565Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.946602749s" Jan 14 13:23:49.006135 containerd[1829]: time="2025-01-14T13:23:49.006115667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 14 13:23:49.043939 containerd[1829]: time="2025-01-14T13:23:49.043613745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 
14 13:23:49.114002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:49.125204 (kubelet)[2925]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:23:49.171004 kubelet[2925]: E0114 13:23:49.170929 2925 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:23:49.173766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:23:49.174088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:23:50.200515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295655189.mount: Deactivated successfully. Jan 14 13:23:50.220445 containerd[1829]: time="2025-01-14T13:23:50.220389680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:50.222469 containerd[1829]: time="2025-01-14T13:23:50.222274794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 14 13:23:50.228248 containerd[1829]: time="2025-01-14T13:23:50.226989029Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:50.231281 containerd[1829]: time="2025-01-14T13:23:50.230470155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:50.231281 containerd[1829]: time="2025-01-14T13:23:50.231136060Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.187474215s" Jan 14 13:23:50.231281 containerd[1829]: time="2025-01-14T13:23:50.231168160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 14 13:23:50.255283 containerd[1829]: time="2025-01-14T13:23:50.255238039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 14 13:23:50.924320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043894317.mount: Deactivated successfully. Jan 14 13:23:53.108159 containerd[1829]: time="2025-01-14T13:23:53.108031661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:53.110078 containerd[1829]: time="2025-01-14T13:23:53.110013078Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 14 13:23:53.112546 containerd[1829]: time="2025-01-14T13:23:53.112485699Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:53.116505 containerd[1829]: time="2025-01-14T13:23:53.116431833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:23:53.117699 containerd[1829]: time="2025-01-14T13:23:53.117499942Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag 
\"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.862223703s" Jan 14 13:23:53.117699 containerd[1829]: time="2025-01-14T13:23:53.117539142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 14 13:23:56.159115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:56.165165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:56.199734 systemd[1]: Reloading requested from client PID 3056 ('systemctl') (unit session-9.scope)... Jan 14 13:23:56.199774 systemd[1]: Reloading... Jan 14 13:23:56.311880 zram_generator::config[3099]: No configuration found. Jan 14 13:23:56.454902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:23:56.542696 systemd[1]: Reloading finished in 342 ms. Jan 14 13:23:56.596905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:56.602221 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:23:56.602799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:56.612926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:23:56.842005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:23:56.858378 (kubelet)[3181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:23:57.404176 kubelet[3181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:23:57.404176 kubelet[3181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:23:57.404176 kubelet[3181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:23:57.404700 kubelet[3181]: I0114 13:23:57.404235 3181 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:23:57.900624 kubelet[3181]: I0114 13:23:57.900583 3181 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 14 13:23:57.900624 kubelet[3181]: I0114 13:23:57.900614 3181 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:23:57.900922 kubelet[3181]: I0114 13:23:57.900902 3181 server.go:919] "Client rotation is on, will bootstrap in background" Jan 14 13:23:57.919024 kubelet[3181]: E0114 13:23:57.918973 3181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.919861 kubelet[3181]: I0114 13:23:57.919808 3181 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:23:57.930940 kubelet[3181]: I0114 13:23:57.930315 3181 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:23:57.932764 kubelet[3181]: I0114 13:23:57.932293 3181 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:23:57.932764 kubelet[3181]: I0114 13:23:57.932675 3181 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 13:23:57.933826 kubelet[3181]: I0114 13:23:57.933561 3181 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:23:57.933826 kubelet[3181]: I0114 13:23:57.933601 3181 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:23:57.933996 kubelet[3181]: 
I0114 13:23:57.933983 3181 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:23:57.934169 kubelet[3181]: I0114 13:23:57.934158 3181 kubelet.go:396] "Attempting to sync node with API server" Jan 14 13:23:57.934253 kubelet[3181]: I0114 13:23:57.934246 3181 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:23:57.935243 kubelet[3181]: I0114 13:23:57.935222 3181 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:23:57.935544 kubelet[3181]: I0114 13:23:57.935365 3181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:23:57.935611 kubelet[3181]: W0114 13:23:57.935564 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-0907529617&limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.935663 kubelet[3181]: E0114 13:23:57.935625 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-0907529617&limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.937158 kubelet[3181]: W0114 13:23:57.937113 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.937669 kubelet[3181]: E0114 13:23:57.937259 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.937669 kubelet[3181]: I0114 13:23:57.937367 3181 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:23:57.940994 kubelet[3181]: I0114 13:23:57.940841 3181 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:23:57.941793 kubelet[3181]: W0114 13:23:57.941770 3181 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 13:23:57.942388 kubelet[3181]: I0114 13:23:57.942366 3181 server.go:1256] "Started kubelet" Jan 14 13:23:57.943891 kubelet[3181]: I0114 13:23:57.943590 3181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:23:57.948766 kubelet[3181]: E0114 13:23:57.948578 3181 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152.2.0-a-0907529617.181a91e966c4f5c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152.2.0-a-0907529617,UID:ci-4152.2.0-a-0907529617,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152.2.0-a-0907529617,},FirstTimestamp:2025-01-14 13:23:57.942339009 +0000 UTC m=+1.079061608,LastTimestamp:2025-01-14 13:23:57.942339009 +0000 UTC m=+1.079061608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152.2.0-a-0907529617,}" Jan 14 13:23:57.951243 kubelet[3181]: I0114 13:23:57.950214 3181 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:23:57.951362 kubelet[3181]: I0114 13:23:57.951344 3181 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:23:57.951971 kubelet[3181]: I0114 13:23:57.951953 3181 server.go:461] "Adding 
debug handlers to kubelet server" Jan 14 13:23:57.953959 kubelet[3181]: I0114 13:23:57.953710 3181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:23:57.954324 kubelet[3181]: I0114 13:23:57.954308 3181 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:23:57.954456 kubelet[3181]: I0114 13:23:57.954393 3181 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 14 13:23:57.956449 kubelet[3181]: I0114 13:23:57.954460 3181 reconciler_new.go:29] "Reconciler: start to sync state" Jan 14 13:23:57.956961 kubelet[3181]: E0114 13:23:57.956942 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-0907529617?timeout=10s\": dial tcp 10.200.4.19:6443: connect: connection refused" interval="200ms" Jan 14 13:23:57.957390 kubelet[3181]: I0114 13:23:57.957314 3181 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:23:57.957569 kubelet[3181]: I0114 13:23:57.957495 3181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:23:57.959115 kubelet[3181]: W0114 13:23:57.958589 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.959115 kubelet[3181]: E0114 13:23:57.958640 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.960909 kubelet[3181]: I0114 13:23:57.960893 3181 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:23:57.968970 kubelet[3181]: I0114 13:23:57.968942 3181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:23:57.970539 kubelet[3181]: I0114 13:23:57.970515 3181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:23:57.970676 kubelet[3181]: I0114 13:23:57.970666 3181 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:23:57.971770 kubelet[3181]: I0114 13:23:57.970749 3181 kubelet.go:2329] "Starting kubelet main sync loop" Jan 14 13:23:57.971770 kubelet[3181]: E0114 13:23:57.970823 3181 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:23:57.971770 kubelet[3181]: E0114 13:23:57.971094 3181 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:23:57.982033 kubelet[3181]: W0114 13:23:57.981968 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:57.982033 kubelet[3181]: E0114 13:23:57.982035 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:58.021713 kubelet[3181]: I0114 13:23:58.021675 3181 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:23:58.021713 kubelet[3181]: I0114 13:23:58.021711 3181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:23:58.021966 kubelet[3181]: I0114 13:23:58.021738 3181 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:23:58.028380 kubelet[3181]: I0114 13:23:58.028345 3181 policy_none.go:49] "None policy: Start" Jan 14 13:23:58.029419 kubelet[3181]: I0114 13:23:58.029392 3181 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:23:58.029529 kubelet[3181]: I0114 13:23:58.029427 3181 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:23:58.038206 kubelet[3181]: I0114 13:23:58.038172 3181 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:23:58.039747 kubelet[3181]: I0114 13:23:58.038686 3181 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:23:58.041332 kubelet[3181]: E0114 13:23:58.041308 3181 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152.2.0-a-0907529617\" not 
found" Jan 14 13:23:58.054251 kubelet[3181]: I0114 13:23:58.054224 3181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:58.054629 kubelet[3181]: E0114 13:23:58.054609 3181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.19:6443/api/v1/nodes\": dial tcp 10.200.4.19:6443: connect: connection refused" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:58.071169 kubelet[3181]: I0114 13:23:58.071104 3181 topology_manager.go:215] "Topology Admit Handler" podUID="8ce0772ab8168cf2de36137be655c472" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.073408 kubelet[3181]: I0114 13:23:58.073377 3181 topology_manager.go:215] "Topology Admit Handler" podUID="fbc3eabd66c1752dbe541378dc06665e" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.075696 kubelet[3181]: I0114 13:23:58.075401 3181 topology_manager.go:215] "Topology Admit Handler" podUID="614a52aa9937c1bae7fd8d2f782c4330" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157556 kubelet[3181]: I0114 13:23:58.156911 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ce0772ab8168cf2de36137be655c472-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-0907529617\" (UID: \"8ce0772ab8168cf2de36137be655c472\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157556 kubelet[3181]: I0114 13:23:58.156981 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " 
pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157556 kubelet[3181]: I0114 13:23:58.157035 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157556 kubelet[3181]: I0114 13:23:58.157077 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/614a52aa9937c1bae7fd8d2f782c4330-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-0907529617\" (UID: \"614a52aa9937c1bae7fd8d2f782c4330\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157556 kubelet[3181]: I0114 13:23:58.157121 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157962 kubelet[3181]: I0114 13:23:58.157151 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ce0772ab8168cf2de36137be655c472-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-0907529617\" (UID: \"8ce0772ab8168cf2de36137be655c472\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157962 kubelet[3181]: I0114 13:23:58.157185 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/8ce0772ab8168cf2de36137be655c472-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-0907529617\" (UID: \"8ce0772ab8168cf2de36137be655c472\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157962 kubelet[3181]: I0114 13:23:58.157217 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.157962 kubelet[3181]: I0114 13:23:58.157254 3181 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:23:58.158655 kubelet[3181]: E0114 13:23:58.158620 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-0907529617?timeout=10s\": dial tcp 10.200.4.19:6443: connect: connection refused" interval="400ms" Jan 14 13:23:58.256975 kubelet[3181]: I0114 13:23:58.256936 3181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:58.257394 kubelet[3181]: E0114 13:23:58.257349 3181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.19:6443/api/v1/nodes\": dial tcp 10.200.4.19:6443: connect: connection refused" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:58.380925 containerd[1829]: time="2025-01-14T13:23:58.380868051Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-0907529617,Uid:8ce0772ab8168cf2de36137be655c472,Namespace:kube-system,Attempt:0,}" Jan 14 13:23:58.386537 containerd[1829]: time="2025-01-14T13:23:58.386495599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-0907529617,Uid:fbc3eabd66c1752dbe541378dc06665e,Namespace:kube-system,Attempt:0,}" Jan 14 13:23:58.393308 containerd[1829]: time="2025-01-14T13:23:58.393267756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-0907529617,Uid:614a52aa9937c1bae7fd8d2f782c4330,Namespace:kube-system,Attempt:0,}" Jan 14 13:23:58.559810 kubelet[3181]: E0114 13:23:58.559744 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-0907529617?timeout=10s\": dial tcp 10.200.4.19:6443: connect: connection refused" interval="800ms" Jan 14 13:23:58.659937 kubelet[3181]: I0114 13:23:58.659906 3181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:58.660316 kubelet[3181]: E0114 13:23:58.660294 3181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.19:6443/api/v1/nodes\": dial tcp 10.200.4.19:6443: connect: connection refused" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:58.950956 kubelet[3181]: W0114 13:23:58.950810 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.4.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:58.950956 kubelet[3181]: E0114 13:23:58.950877 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: 
connect: connection refused Jan 14 13:23:59.035030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276693399.mount: Deactivated successfully. Jan 14 13:23:59.084970 containerd[1829]: time="2025-01-14T13:23:59.084900658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:23:59.096206 containerd[1829]: time="2025-01-14T13:23:59.096121453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 14 13:23:59.099142 containerd[1829]: time="2025-01-14T13:23:59.099097879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:23:59.103288 containerd[1829]: time="2025-01-14T13:23:59.103238414Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:23:59.109844 containerd[1829]: time="2025-01-14T13:23:59.109460167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:23:59.113472 containerd[1829]: time="2025-01-14T13:23:59.113424901Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:23:59.116409 containerd[1829]: time="2025-01-14T13:23:59.116360626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:23:59.117226 containerd[1829]: 
time="2025-01-14T13:23:59.117181533Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.199981ms" Jan 14 13:23:59.118862 containerd[1829]: time="2025-01-14T13:23:59.118721046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 13:23:59.126624 containerd[1829]: time="2025-01-14T13:23:59.126571213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.974014ms" Jan 14 13:23:59.157783 containerd[1829]: time="2025-01-14T13:23:59.156520469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 763.150112ms" Jan 14 13:23:59.164993 kubelet[3181]: W0114 13:23:59.164930 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.4.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-0907529617&limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:59.164993 kubelet[3181]: E0114 13:23:59.164997 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.200.4.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152.2.0-a-0907529617&limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:59.265026 kubelet[3181]: W0114 13:23:59.264953 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.4.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:59.265026 kubelet[3181]: E0114 13:23:59.265027 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:59.360982 kubelet[3181]: E0114 13:23:59.360931 3181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152.2.0-a-0907529617?timeout=10s\": dial tcp 10.200.4.19:6443: connect: connection refused" interval="1.6s" Jan 14 13:23:59.384604 kubelet[3181]: W0114 13:23:59.384538 3181 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.4.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:59.384604 kubelet[3181]: E0114 13:23:59.384604 3181 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:23:59.463539 kubelet[3181]: I0114 13:23:59.463474 3181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-0907529617" Jan 14 
13:23:59.464164 kubelet[3181]: E0114 13:23:59.464132 3181 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.19:6443/api/v1/nodes\": dial tcp 10.200.4.19:6443: connect: connection refused" node="ci-4152.2.0-a-0907529617" Jan 14 13:23:59.985096 kubelet[3181]: E0114 13:23:59.985050 3181 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.19:6443: connect: connection refused Jan 14 13:24:00.036267 containerd[1829]: time="2025-01-14T13:24:00.035610562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:00.036267 containerd[1829]: time="2025-01-14T13:24:00.035693762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:00.036267 containerd[1829]: time="2025-01-14T13:24:00.035710163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:00.036901 containerd[1829]: time="2025-01-14T13:24:00.036147066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:00.036901 containerd[1829]: time="2025-01-14T13:24:00.036216767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:00.037463 containerd[1829]: time="2025-01-14T13:24:00.036239367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:00.037603 containerd[1829]: time="2025-01-14T13:24:00.037056474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:00.037603 containerd[1829]: time="2025-01-14T13:24:00.037233075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:00.037603 containerd[1829]: time="2025-01-14T13:24:00.037277976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:00.037792 containerd[1829]: time="2025-01-14T13:24:00.037185375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:00.037912 containerd[1829]: time="2025-01-14T13:24:00.037844781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:00.041779 containerd[1829]: time="2025-01-14T13:24:00.039448294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:00.162276 containerd[1829]: time="2025-01-14T13:24:00.162230235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152.2.0-a-0907529617,Uid:8ce0772ab8168cf2de36137be655c472,Namespace:kube-system,Attempt:0,} returns sandbox id \"93b81ce42ec30116fbd2c7203e44b16f127ce2d2a3bedb069642e83037518465\"" Jan 14 13:24:00.171659 containerd[1829]: time="2025-01-14T13:24:00.171553714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152.2.0-a-0907529617,Uid:614a52aa9937c1bae7fd8d2f782c4330,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f1241c734f8a6b3c50a7bf1cfd25f420158a45103d3827b6d0200d0ea19d260\"" Jan 14 13:24:00.174773 containerd[1829]: time="2025-01-14T13:24:00.174438938Z" level=info msg="CreateContainer within sandbox \"93b81ce42ec30116fbd2c7203e44b16f127ce2d2a3bedb069642e83037518465\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:24:00.178089 containerd[1829]: time="2025-01-14T13:24:00.178050869Z" level=info msg="CreateContainer within sandbox \"2f1241c734f8a6b3c50a7bf1cfd25f420158a45103d3827b6d0200d0ea19d260\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:24:00.179273 containerd[1829]: time="2025-01-14T13:24:00.179004577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152.2.0-a-0907529617,Uid:fbc3eabd66c1752dbe541378dc06665e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0e3a9b72ab6dfcfa64a47e03645fcbbe145510aa36754a4949fa236a5d36b44\"" Jan 14 13:24:00.182809 containerd[1829]: time="2025-01-14T13:24:00.182780909Z" level=info msg="CreateContainer within sandbox \"c0e3a9b72ab6dfcfa64a47e03645fcbbe145510aa36754a4949fa236a5d36b44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:24:00.250531 containerd[1829]: time="2025-01-14T13:24:00.250474682Z" level=info msg="CreateContainer within sandbox 
\"93b81ce42ec30116fbd2c7203e44b16f127ce2d2a3bedb069642e83037518465\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b80738b4bf9e3cec90007be96ad9d1d7e19799c0e617a4d626f382a14494b312\"" Jan 14 13:24:00.251354 containerd[1829]: time="2025-01-14T13:24:00.251300889Z" level=info msg="StartContainer for \"b80738b4bf9e3cec90007be96ad9d1d7e19799c0e617a4d626f382a14494b312\"" Jan 14 13:24:00.254837 containerd[1829]: time="2025-01-14T13:24:00.254747418Z" level=info msg="CreateContainer within sandbox \"c0e3a9b72ab6dfcfa64a47e03645fcbbe145510aa36754a4949fa236a5d36b44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6170998190c1177f0f55b180d755922cd39254758c4ac117f34d21e29415fbee\"" Jan 14 13:24:00.261990 containerd[1829]: time="2025-01-14T13:24:00.261634777Z" level=info msg="CreateContainer within sandbox \"2f1241c734f8a6b3c50a7bf1cfd25f420158a45103d3827b6d0200d0ea19d260\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d9a2aaa1df555661d1c5509d84346731c9796a9743d55891a09ce5945811c1c1\"" Jan 14 13:24:00.263955 containerd[1829]: time="2025-01-14T13:24:00.262695186Z" level=info msg="StartContainer for \"d9a2aaa1df555661d1c5509d84346731c9796a9743d55891a09ce5945811c1c1\"" Jan 14 13:24:00.264135 containerd[1829]: time="2025-01-14T13:24:00.264107998Z" level=info msg="StartContainer for \"6170998190c1177f0f55b180d755922cd39254758c4ac117f34d21e29415fbee\"" Jan 14 13:24:00.401782 containerd[1829]: time="2025-01-14T13:24:00.400152750Z" level=info msg="StartContainer for \"b80738b4bf9e3cec90007be96ad9d1d7e19799c0e617a4d626f382a14494b312\" returns successfully" Jan 14 13:24:00.412783 containerd[1829]: time="2025-01-14T13:24:00.410880341Z" level=info msg="StartContainer for \"6170998190c1177f0f55b180d755922cd39254758c4ac117f34d21e29415fbee\" returns successfully" Jan 14 13:24:00.441596 containerd[1829]: time="2025-01-14T13:24:00.441549001Z" level=info msg="StartContainer for 
\"d9a2aaa1df555661d1c5509d84346731c9796a9743d55891a09ce5945811c1c1\" returns successfully" Jan 14 13:24:01.056026 systemd[1]: run-containerd-runc-k8s.io-93b81ce42ec30116fbd2c7203e44b16f127ce2d2a3bedb069642e83037518465-runc.lDcH9H.mount: Deactivated successfully. Jan 14 13:24:01.067303 kubelet[3181]: I0114 13:24:01.067270 3181 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-0907529617" Jan 14 13:24:02.801180 kubelet[3181]: E0114 13:24:02.801123 3181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152.2.0-a-0907529617\" not found" node="ci-4152.2.0-a-0907529617" Jan 14 13:24:02.831232 kubelet[3181]: I0114 13:24:02.830936 3181 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-0907529617" Jan 14 13:24:02.940016 kubelet[3181]: I0114 13:24:02.939969 3181 apiserver.go:52] "Watching apiserver" Jan 14 13:24:02.955654 kubelet[3181]: I0114 13:24:02.955608 3181 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 13:24:05.589637 systemd[1]: Reloading requested from client PID 3451 ('systemctl') (unit session-9.scope)... Jan 14 13:24:05.589656 systemd[1]: Reloading... Jan 14 13:24:05.682792 zram_generator::config[3491]: No configuration found. Jan 14 13:24:05.821738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 13:24:05.909041 systemd[1]: Reloading finished in 317 ms. Jan 14 13:24:05.940540 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:24:05.961395 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:24:05.962552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:24:05.971353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 13:24:06.161997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:24:06.177847 (kubelet)[3568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:24:06.241392 kubelet[3568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:24:06.241392 kubelet[3568]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 13:24:06.241392 kubelet[3568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:24:06.241940 kubelet[3568]: I0114 13:24:06.241450 3568 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:24:06.247202 kubelet[3568]: I0114 13:24:06.247165 3568 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 14 13:24:06.247202 kubelet[3568]: I0114 13:24:06.247192 3568 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:24:06.247433 kubelet[3568]: I0114 13:24:06.247413 3568 server.go:919] "Client rotation is on, will bootstrap in background" Jan 14 13:24:06.250478 kubelet[3568]: I0114 13:24:06.250215 3568 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 14 13:24:06.252209 kubelet[3568]: I0114 13:24:06.252186 3568 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:24:06.262579 kubelet[3568]: I0114 13:24:06.262546 3568 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 13:24:06.263123 kubelet[3568]: I0114 13:24:06.263097 3568 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:24:06.263298 kubelet[3568]: I0114 13:24:06.263279 3568 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Jan 14 13:24:06.263434 kubelet[3568]: I0114 13:24:06.263311 3568 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:24:06.263434 kubelet[3568]: I0114 13:24:06.263323 3568 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 13:24:06.263434 kubelet[3568]: I0114 13:24:06.263370 3568 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:24:06.264868 kubelet[3568]: I0114 13:24:06.264799 3568 kubelet.go:396] "Attempting to sync node with API server" Jan 14 13:24:06.264868 kubelet[3568]: I0114 13:24:06.264828 3568 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:24:06.265086 kubelet[3568]: I0114 13:24:06.265023 3568 kubelet.go:312] "Adding apiserver pod source" Jan 14 13:24:06.265086 kubelet[3568]: I0114 13:24:06.265043 3568 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:24:06.280600 kubelet[3568]: I0114 13:24:06.278988 3568 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 14 13:24:06.280600 kubelet[3568]: I0114 13:24:06.279246 3568 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:24:06.280600 kubelet[3568]: I0114 13:24:06.279923 3568 server.go:1256] "Started kubelet" Jan 14 13:24:06.285278 kubelet[3568]: I0114 13:24:06.285250 3568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:24:06.294645 kubelet[3568]: I0114 13:24:06.294618 3568 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 13:24:06.296478 kubelet[3568]: I0114 13:24:06.296455 3568 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 14 13:24:06.296619 kubelet[3568]: I0114 13:24:06.293746 3568 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:24:06.298367 kubelet[3568]: I0114 13:24:06.298351 3568 reconciler_new.go:29] "Reconciler: start to sync state" 
Jan 14 13:24:06.299500 kubelet[3568]: I0114 13:24:06.295280 3568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:24:06.299853 kubelet[3568]: I0114 13:24:06.299836 3568 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:24:06.300931 kubelet[3568]: I0114 13:24:06.300914 3568 server.go:461] "Adding debug handlers to kubelet server" Jan 14 13:24:06.303308 kubelet[3568]: I0114 13:24:06.303006 3568 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:24:06.304631 kubelet[3568]: I0114 13:24:06.304324 3568 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:24:06.304631 kubelet[3568]: I0114 13:24:06.304357 3568 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 13:24:06.304631 kubelet[3568]: I0114 13:24:06.304376 3568 kubelet.go:2329] "Starting kubelet main sync loop" Jan 14 13:24:06.304631 kubelet[3568]: E0114 13:24:06.304427 3568 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:24:06.306688 kubelet[3568]: I0114 13:24:06.306666 3568 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:24:06.307921 kubelet[3568]: E0114 13:24:06.307906 3568 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:24:06.312100 kubelet[3568]: I0114 13:24:06.312083 3568 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:24:06.312222 kubelet[3568]: I0114 13:24:06.312212 3568 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:24:06.379891 kubelet[3568]: I0114 13:24:06.379856 3568 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 13:24:06.379891 kubelet[3568]: I0114 13:24:06.379883 3568 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 13:24:06.379891 kubelet[3568]: I0114 13:24:06.379904 3568 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:24:06.380144 kubelet[3568]: I0114 13:24:06.380092 3568 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:24:06.380144 kubelet[3568]: I0114 13:24:06.380120 3568 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:24:06.380144 kubelet[3568]: I0114 13:24:06.380129 3568 policy_none.go:49] "None policy: Start" Jan 14 13:24:06.380791 kubelet[3568]: I0114 13:24:06.380775 3568 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 13:24:06.380910 kubelet[3568]: I0114 13:24:06.380892 3568 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:24:06.381084 kubelet[3568]: I0114 13:24:06.381067 3568 state_mem.go:75] "Updated machine memory state" Jan 14 13:24:06.382294 kubelet[3568]: I0114 13:24:06.382191 3568 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:24:06.382482 kubelet[3568]: I0114 13:24:06.382464 3568 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:24:06.403267 kubelet[3568]: I0114 13:24:06.403233 3568 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152.2.0-a-0907529617" Jan 14 13:24:06.406055 kubelet[3568]: I0114 13:24:06.405401 3568 
topology_manager.go:215] "Topology Admit Handler" podUID="8ce0772ab8168cf2de36137be655c472" podNamespace="kube-system" podName="kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.406055 kubelet[3568]: I0114 13:24:06.405494 3568 topology_manager.go:215] "Topology Admit Handler" podUID="fbc3eabd66c1752dbe541378dc06665e" podNamespace="kube-system" podName="kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.406055 kubelet[3568]: I0114 13:24:06.405541 3568 topology_manager.go:215] "Topology Admit Handler" podUID="614a52aa9937c1bae7fd8d2f782c4330" podNamespace="kube-system" podName="kube-scheduler-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.423437 kubelet[3568]: W0114 13:24:06.421354 3568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:24:06.423437 kubelet[3568]: W0114 13:24:06.421362 3568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:24:06.423437 kubelet[3568]: I0114 13:24:06.421484 3568 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152.2.0-a-0907529617" Jan 14 13:24:06.423655 kubelet[3568]: I0114 13:24:06.423511 3568 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152.2.0-a-0907529617" Jan 14 13:24:06.423655 kubelet[3568]: W0114 13:24:06.421534 3568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:24:06.502561 kubelet[3568]: I0114 13:24:06.502480 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-kubeconfig\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " 
pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.502561 kubelet[3568]: I0114 13:24:06.502546 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ce0772ab8168cf2de36137be655c472-ca-certs\") pod \"kube-apiserver-ci-4152.2.0-a-0907529617\" (UID: \"8ce0772ab8168cf2de36137be655c472\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.502561 kubelet[3568]: I0114 13:24:06.502584 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ce0772ab8168cf2de36137be655c472-k8s-certs\") pod \"kube-apiserver-ci-4152.2.0-a-0907529617\" (UID: \"8ce0772ab8168cf2de36137be655c472\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.503008 kubelet[3568]: I0114 13:24:06.502620 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-ca-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.503008 kubelet[3568]: I0114 13:24:06.502655 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-k8s-certs\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.503008 kubelet[3568]: I0114 13:24:06.502694 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8ce0772ab8168cf2de36137be655c472-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152.2.0-a-0907529617\" (UID: \"8ce0772ab8168cf2de36137be655c472\") " pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.503008 kubelet[3568]: I0114 13:24:06.502728 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-flexvolume-dir\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.503008 kubelet[3568]: I0114 13:24:06.502778 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbc3eabd66c1752dbe541378dc06665e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152.2.0-a-0907529617\" (UID: \"fbc3eabd66c1752dbe541378dc06665e\") " pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.503163 kubelet[3568]: I0114 13:24:06.502834 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/614a52aa9937c1bae7fd8d2f782c4330-kubeconfig\") pod \"kube-scheduler-ci-4152.2.0-a-0907529617\" (UID: \"614a52aa9937c1bae7fd8d2f782c4330\") " pod="kube-system/kube-scheduler-ci-4152.2.0-a-0907529617" Jan 14 13:24:06.563884 sudo[3598]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 14 13:24:06.564262 sudo[3598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 14 13:24:07.082493 sudo[3598]: pam_unix(sudo:session): session closed for user root Jan 14 13:24:07.268308 kubelet[3568]: I0114 13:24:07.266567 3568 apiserver.go:52] "Watching apiserver" Jan 14 
13:24:07.297002 kubelet[3568]: I0114 13:24:07.296916 3568 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 13:24:07.352929 kubelet[3568]: W0114 13:24:07.352750 3568 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:24:07.355774 kubelet[3568]: E0114 13:24:07.354355 3568 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152.2.0-a-0907529617\" already exists" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" Jan 14 13:24:07.378035 kubelet[3568]: I0114 13:24:07.378002 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152.2.0-a-0907529617" podStartSLOduration=1.376813563 podStartE2EDuration="1.376813563s" podCreationTimestamp="2025-01-14 13:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:24:07.375594552 +0000 UTC m=+1.191541697" watchObservedRunningTime="2025-01-14 13:24:07.376813563 +0000 UTC m=+1.192760808" Jan 14 13:24:07.417151 kubelet[3568]: I0114 13:24:07.417116 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152.2.0-a-0907529617" podStartSLOduration=1.417067904 podStartE2EDuration="1.417067904s" podCreationTimestamp="2025-01-14 13:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:24:07.399666056 +0000 UTC m=+1.215613201" watchObservedRunningTime="2025-01-14 13:24:07.417067904 +0000 UTC m=+1.233015049" Jan 14 13:24:07.432183 kubelet[3568]: I0114 13:24:07.431544 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152.2.0-a-0907529617" 
podStartSLOduration=1.431482226 podStartE2EDuration="1.431482226s" podCreationTimestamp="2025-01-14 13:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:24:07.417744209 +0000 UTC m=+1.233691354" watchObservedRunningTime="2025-01-14 13:24:07.431482226 +0000 UTC m=+1.247429471" Jan 14 13:24:08.441668 sudo[2546]: pam_unix(sudo:session): session closed for user root Jan 14 13:24:08.541684 sshd[2545]: Connection closed by 10.200.16.10 port 41856 Jan 14 13:24:08.542425 sshd-session[2542]: pam_unix(sshd:session): session closed for user core Jan 14 13:24:08.545576 systemd[1]: sshd@6-10.200.4.19:22-10.200.16.10:41856.service: Deactivated successfully. Jan 14 13:24:08.551408 systemd-logind[1807]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:24:08.551424 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:24:08.553070 systemd-logind[1807]: Removed session 9. Jan 14 13:24:18.537323 kubelet[3568]: I0114 13:24:18.537126 3568 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:24:18.538035 containerd[1829]: time="2025-01-14T13:24:18.537998680Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 13:24:18.538488 kubelet[3568]: I0114 13:24:18.538233 3568 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:24:19.462779 kubelet[3568]: I0114 13:24:19.461467 3568 topology_manager.go:215] "Topology Admit Handler" podUID="2450a5fb-22df-442a-978a-13712a65357d" podNamespace="kube-system" podName="kube-proxy-j8nd5" Jan 14 13:24:19.464180 kubelet[3568]: I0114 13:24:19.464124 3568 topology_manager.go:215] "Topology Admit Handler" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" podNamespace="kube-system" podName="cilium-gjjns" Jan 14 13:24:19.489918 kubelet[3568]: I0114 13:24:19.489880 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-run\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.489918 kubelet[3568]: I0114 13:24:19.489930 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2450a5fb-22df-442a-978a-13712a65357d-kube-proxy\") pod \"kube-proxy-j8nd5\" (UID: \"2450a5fb-22df-442a-978a-13712a65357d\") " pod="kube-system/kube-proxy-j8nd5" Jan 14 13:24:19.490136 kubelet[3568]: I0114 13:24:19.489958 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hostproc\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490136 kubelet[3568]: I0114 13:24:19.489982 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-xtables-lock\") pod \"cilium-gjjns\" (UID: 
\"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490136 kubelet[3568]: I0114 13:24:19.490010 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-kernel\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490136 kubelet[3568]: I0114 13:24:19.490036 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2450a5fb-22df-442a-978a-13712a65357d-lib-modules\") pod \"kube-proxy-j8nd5\" (UID: \"2450a5fb-22df-442a-978a-13712a65357d\") " pod="kube-system/kube-proxy-j8nd5" Jan 14 13:24:19.490136 kubelet[3568]: I0114 13:24:19.490063 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-bpf-maps\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490136 kubelet[3568]: I0114 13:24:19.490088 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-net\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490381 kubelet[3568]: I0114 13:24:19.490122 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hubble-tls\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490381 kubelet[3568]: I0114 
13:24:19.490154 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-etc-cni-netd\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490381 kubelet[3568]: I0114 13:24:19.490185 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-config-path\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490381 kubelet[3568]: I0114 13:24:19.490212 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-cgroup\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490381 kubelet[3568]: I0114 13:24:19.490241 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5z58\" (UniqueName: \"kubernetes.io/projected/2450a5fb-22df-442a-978a-13712a65357d-kube-api-access-d5z58\") pod \"kube-proxy-j8nd5\" (UID: \"2450a5fb-22df-442a-978a-13712a65357d\") " pod="kube-system/kube-proxy-j8nd5" Jan 14 13:24:19.490570 kubelet[3568]: I0114 13:24:19.490273 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d13ace9-7237-4ca8-b3a4-687877cea7f5-clustermesh-secrets\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490570 kubelet[3568]: I0114 13:24:19.490300 3568 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cni-path\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490570 kubelet[3568]: I0114 13:24:19.490327 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-lib-modules\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490570 kubelet[3568]: I0114 13:24:19.490355 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knm8s\" (UniqueName: \"kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-kube-api-access-knm8s\") pod \"cilium-gjjns\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " pod="kube-system/cilium-gjjns" Jan 14 13:24:19.490570 kubelet[3568]: I0114 13:24:19.490388 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2450a5fb-22df-442a-978a-13712a65357d-xtables-lock\") pod \"kube-proxy-j8nd5\" (UID: \"2450a5fb-22df-442a-978a-13712a65357d\") " pod="kube-system/kube-proxy-j8nd5" Jan 14 13:24:19.565950 kubelet[3568]: I0114 13:24:19.565898 3568 topology_manager.go:215] "Topology Admit Handler" podUID="7141d984-4995-478e-9f1d-b12fd40144ce" podNamespace="kube-system" podName="cilium-operator-5cc964979-7cm98" Jan 14 13:24:19.591819 kubelet[3568]: I0114 13:24:19.591372 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7141d984-4995-478e-9f1d-b12fd40144ce-cilium-config-path\") pod \"cilium-operator-5cc964979-7cm98\" (UID: 
\"7141d984-4995-478e-9f1d-b12fd40144ce\") " pod="kube-system/cilium-operator-5cc964979-7cm98" Jan 14 13:24:19.591819 kubelet[3568]: I0114 13:24:19.591407 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9hb9\" (UniqueName: \"kubernetes.io/projected/7141d984-4995-478e-9f1d-b12fd40144ce-kube-api-access-d9hb9\") pod \"cilium-operator-5cc964979-7cm98\" (UID: \"7141d984-4995-478e-9f1d-b12fd40144ce\") " pod="kube-system/cilium-operator-5cc964979-7cm98" Jan 14 13:24:19.777191 containerd[1829]: time="2025-01-14T13:24:19.776744594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjjns,Uid:6d13ace9-7237-4ca8-b3a4-687877cea7f5,Namespace:kube-system,Attempt:0,}" Jan 14 13:24:19.786534 containerd[1829]: time="2025-01-14T13:24:19.786481862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8nd5,Uid:2450a5fb-22df-442a-978a-13712a65357d,Namespace:kube-system,Attempt:0,}" Jan 14 13:24:19.877098 containerd[1829]: time="2025-01-14T13:24:19.877047791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7cm98,Uid:7141d984-4995-478e-9f1d-b12fd40144ce,Namespace:kube-system,Attempt:0,}" Jan 14 13:24:20.736284 containerd[1829]: time="2025-01-14T13:24:20.735900464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:20.736284 containerd[1829]: time="2025-01-14T13:24:20.735968064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:20.736284 containerd[1829]: time="2025-01-14T13:24:20.735990464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:20.736284 containerd[1829]: time="2025-01-14T13:24:20.736092265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:20.760062 containerd[1829]: time="2025-01-14T13:24:20.759716429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:20.760062 containerd[1829]: time="2025-01-14T13:24:20.759799230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:20.760062 containerd[1829]: time="2025-01-14T13:24:20.759823130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:20.760062 containerd[1829]: time="2025-01-14T13:24:20.759946431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:20.765463 containerd[1829]: time="2025-01-14T13:24:20.762910851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:20.765463 containerd[1829]: time="2025-01-14T13:24:20.762970752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:20.765463 containerd[1829]: time="2025-01-14T13:24:20.762987052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:20.765463 containerd[1829]: time="2025-01-14T13:24:20.763094253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:20.862038 containerd[1829]: time="2025-01-14T13:24:20.861864639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gjjns,Uid:6d13ace9-7237-4ca8-b3a4-687877cea7f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\"" Jan 14 13:24:20.865947 containerd[1829]: time="2025-01-14T13:24:20.864516058Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 14 13:24:20.873056 containerd[1829]: time="2025-01-14T13:24:20.872831816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-7cm98,Uid:7141d984-4995-478e-9f1d-b12fd40144ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86\"" Jan 14 13:24:20.873952 containerd[1829]: time="2025-01-14T13:24:20.873905723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8nd5,Uid:2450a5fb-22df-442a-978a-13712a65357d,Namespace:kube-system,Attempt:0,} returns sandbox id \"201393bd2cb5342a18f22e16b7dd8da69a7bab928440f4558a8bf03c6d57e724\"" Jan 14 13:24:20.877590 containerd[1829]: time="2025-01-14T13:24:20.877335647Z" level=info msg="CreateContainer within sandbox \"201393bd2cb5342a18f22e16b7dd8da69a7bab928440f4558a8bf03c6d57e724\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:24:20.930554 containerd[1829]: time="2025-01-14T13:24:20.930501417Z" level=info msg="CreateContainer within sandbox \"201393bd2cb5342a18f22e16b7dd8da69a7bab928440f4558a8bf03c6d57e724\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d8e88584a5936ddab94c052e789221a90794971bf6edbbd5ef71294c3796f37\"" Jan 14 13:24:20.932245 containerd[1829]: time="2025-01-14T13:24:20.931303122Z" level=info msg="StartContainer for 
\"6d8e88584a5936ddab94c052e789221a90794971bf6edbbd5ef71294c3796f37\"" Jan 14 13:24:20.999637 containerd[1829]: time="2025-01-14T13:24:20.999574397Z" level=info msg="StartContainer for \"6d8e88584a5936ddab94c052e789221a90794971bf6edbbd5ef71294c3796f37\" returns successfully" Jan 14 13:24:21.380275 kubelet[3568]: I0114 13:24:21.379705 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j8nd5" podStartSLOduration=2.37965954 podStartE2EDuration="2.37965954s" podCreationTimestamp="2025-01-14 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:24:21.379432238 +0000 UTC m=+15.195379383" watchObservedRunningTime="2025-01-14 13:24:21.37965954 +0000 UTC m=+15.195606685" Jan 14 13:24:27.174966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421636573.mount: Deactivated successfully. Jan 14 13:24:29.378719 containerd[1829]: time="2025-01-14T13:24:29.378656491Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:24:29.380831 containerd[1829]: time="2025-01-14T13:24:29.380767705Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735387" Jan 14 13:24:29.384449 containerd[1829]: time="2025-01-14T13:24:29.384391930Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:24:29.386685 containerd[1829]: time="2025-01-14T13:24:29.386638545Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.522081687s" Jan 14 13:24:29.386685 containerd[1829]: time="2025-01-14T13:24:29.386683245Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 14 13:24:29.388512 containerd[1829]: time="2025-01-14T13:24:29.388188355Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 14 13:24:29.389515 containerd[1829]: time="2025-01-14T13:24:29.389492364Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:24:29.420174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171805372.mount: Deactivated successfully. 
Jan 14 13:24:29.428994 containerd[1829]: time="2025-01-14T13:24:29.428943228Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\"" Jan 14 13:24:29.430631 containerd[1829]: time="2025-01-14T13:24:29.429608433Z" level=info msg="StartContainer for \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\"" Jan 14 13:24:29.491987 containerd[1829]: time="2025-01-14T13:24:29.491890050Z" level=info msg="StartContainer for \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\" returns successfully" Jan 14 13:24:30.414488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6-rootfs.mount: Deactivated successfully. Jan 14 13:24:33.808860 containerd[1829]: time="2025-01-14T13:24:33.808804031Z" level=info msg="shim disconnected" id=ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6 namespace=k8s.io Jan 14 13:24:33.809439 containerd[1829]: time="2025-01-14T13:24:33.808890632Z" level=warning msg="cleaning up after shim disconnected" id=ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6 namespace=k8s.io Jan 14 13:24:33.809439 containerd[1829]: time="2025-01-14T13:24:33.808903532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:24:33.822222 containerd[1829]: time="2025-01-14T13:24:33.822171220Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:24:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:24:34.398945 containerd[1829]: time="2025-01-14T13:24:34.398733437Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:24:34.440777 containerd[1829]: time="2025-01-14T13:24:34.440726015Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\"" Jan 14 13:24:34.442577 containerd[1829]: time="2025-01-14T13:24:34.441380320Z" level=info msg="StartContainer for \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\"" Jan 14 13:24:34.525596 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:24:34.526049 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:24:34.526134 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:24:34.536719 containerd[1829]: time="2025-01-14T13:24:34.536682051Z" level=info msg="StartContainer for \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\" returns successfully" Jan 14 13:24:34.542315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:24:34.577706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 13:24:34.604668 containerd[1829]: time="2025-01-14T13:24:34.604594900Z" level=info msg="shim disconnected" id=7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b namespace=k8s.io Jan 14 13:24:34.604668 containerd[1829]: time="2025-01-14T13:24:34.604660201Z" level=warning msg="cleaning up after shim disconnected" id=7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b namespace=k8s.io Jan 14 13:24:34.604668 containerd[1829]: time="2025-01-14T13:24:34.604670201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:24:35.401424 containerd[1829]: time="2025-01-14T13:24:35.400210268Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:24:35.426777 containerd[1829]: time="2025-01-14T13:24:35.426701844Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:24:35.427409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506967043.mount: Deactivated successfully. Jan 14 13:24:35.427661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b-rootfs.mount: Deactivated successfully. 
Jan 14 13:24:35.430351 containerd[1829]: time="2025-01-14T13:24:35.430302768Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907153" Jan 14 13:24:35.456989 containerd[1829]: time="2025-01-14T13:24:35.456948144Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:24:35.469433 containerd[1829]: time="2025-01-14T13:24:35.469389726Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.081163671s" Jan 14 13:24:35.469433 containerd[1829]: time="2025-01-14T13:24:35.469430727Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 14 13:24:35.472787 containerd[1829]: time="2025-01-14T13:24:35.472279746Z" level=info msg="CreateContainer within sandbox \"92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 14 13:24:35.481765 containerd[1829]: time="2025-01-14T13:24:35.481716908Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\"" Jan 14 13:24:35.493814 containerd[1829]: 
time="2025-01-14T13:24:35.493772888Z" level=info msg="StartContainer for \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\"" Jan 14 13:24:35.521168 containerd[1829]: time="2025-01-14T13:24:35.521120169Z" level=info msg="CreateContainer within sandbox \"92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\"" Jan 14 13:24:35.522020 containerd[1829]: time="2025-01-14T13:24:35.521943174Z" level=info msg="StartContainer for \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\"" Jan 14 13:24:35.593772 containerd[1829]: time="2025-01-14T13:24:35.593340547Z" level=info msg="StartContainer for \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\" returns successfully" Jan 14 13:24:35.619836 containerd[1829]: time="2025-01-14T13:24:35.619620021Z" level=info msg="StartContainer for \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" returns successfully" Jan 14 13:24:36.068536 containerd[1829]: time="2025-01-14T13:24:36.068412693Z" level=info msg="shim disconnected" id=9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6 namespace=k8s.io Jan 14 13:24:36.068536 containerd[1829]: time="2025-01-14T13:24:36.068492093Z" level=warning msg="cleaning up after shim disconnected" id=9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6 namespace=k8s.io Jan 14 13:24:36.068536 containerd[1829]: time="2025-01-14T13:24:36.068505093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:24:36.087939 containerd[1829]: time="2025-01-14T13:24:36.087406319Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:24:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:24:36.419842 containerd[1829]: 
time="2025-01-14T13:24:36.419698519Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:24:36.479044 containerd[1829]: time="2025-01-14T13:24:36.478843210Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\"" Jan 14 13:24:36.482021 containerd[1829]: time="2025-01-14T13:24:36.481980731Z" level=info msg="StartContainer for \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\"" Jan 14 13:24:36.486592 kubelet[3568]: I0114 13:24:36.484525 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-7cm98" podStartSLOduration=2.88975475 podStartE2EDuration="17.484469748s" podCreationTimestamp="2025-01-14 13:24:19 +0000 UTC" firstStartedPulling="2025-01-14 13:24:20.875169232 +0000 UTC m=+14.691116377" lastFinishedPulling="2025-01-14 13:24:35.46988423 +0000 UTC m=+29.285831375" observedRunningTime="2025-01-14 13:24:36.437385736 +0000 UTC m=+30.253332981" watchObservedRunningTime="2025-01-14 13:24:36.484469748 +0000 UTC m=+30.300416993" Jan 14 13:24:36.686858 containerd[1829]: time="2025-01-14T13:24:36.685963682Z" level=info msg="StartContainer for \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\" returns successfully" Jan 14 13:24:36.736599 containerd[1829]: time="2025-01-14T13:24:36.736510716Z" level=info msg="shim disconnected" id=eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f namespace=k8s.io Jan 14 13:24:36.736599 containerd[1829]: time="2025-01-14T13:24:36.736602917Z" level=warning msg="cleaning up after shim disconnected" id=eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f namespace=k8s.io Jan 
14 13:24:36.736923 containerd[1829]: time="2025-01-14T13:24:36.736614317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:24:36.778003 containerd[1829]: time="2025-01-14T13:24:36.777925991Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:24:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:24:37.427229 containerd[1829]: time="2025-01-14T13:24:37.427049089Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:24:37.430828 systemd[1]: run-containerd-runc-k8s.io-eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f-runc.Sa36Jf.mount: Deactivated successfully. Jan 14 13:24:37.431213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f-rootfs.mount: Deactivated successfully. 
Jan 14 13:24:37.466130 containerd[1829]: time="2025-01-14T13:24:37.466078347Z" level=info msg="CreateContainer within sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\"" Jan 14 13:24:37.466825 containerd[1829]: time="2025-01-14T13:24:37.466664451Z" level=info msg="StartContainer for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\"" Jan 14 13:24:37.557816 containerd[1829]: time="2025-01-14T13:24:37.556905448Z" level=info msg="StartContainer for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" returns successfully" Jan 14 13:24:37.703355 kubelet[3568]: I0114 13:24:37.701960 3568 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 14 13:24:37.800340 kubelet[3568]: I0114 13:24:37.799658 3568 topology_manager.go:215] "Topology Admit Handler" podUID="6429fc0d-9418-4130-9565-c4635a2ad349" podNamespace="kube-system" podName="coredns-76f75df574-bzp4j" Jan 14 13:24:37.808164 kubelet[3568]: I0114 13:24:37.807874 3568 topology_manager.go:215] "Topology Admit Handler" podUID="2a30ac94-4a26-4b9c-9210-d02b447b7c8c" podNamespace="kube-system" podName="coredns-76f75df574-svznx" Jan 14 13:24:37.822597 kubelet[3568]: I0114 13:24:37.822529 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchx2\" (UniqueName: \"kubernetes.io/projected/2a30ac94-4a26-4b9c-9210-d02b447b7c8c-kube-api-access-bchx2\") pod \"coredns-76f75df574-svznx\" (UID: \"2a30ac94-4a26-4b9c-9210-d02b447b7c8c\") " pod="kube-system/coredns-76f75df574-svznx" Jan 14 13:24:37.823855 kubelet[3568]: I0114 13:24:37.822647 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2a30ac94-4a26-4b9c-9210-d02b447b7c8c-config-volume\") pod \"coredns-76f75df574-svznx\" (UID: \"2a30ac94-4a26-4b9c-9210-d02b447b7c8c\") " pod="kube-system/coredns-76f75df574-svznx" Jan 14 13:24:37.823855 kubelet[3568]: I0114 13:24:37.822700 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqj5v\" (UniqueName: \"kubernetes.io/projected/6429fc0d-9418-4130-9565-c4635a2ad349-kube-api-access-rqj5v\") pod \"coredns-76f75df574-bzp4j\" (UID: \"6429fc0d-9418-4130-9565-c4635a2ad349\") " pod="kube-system/coredns-76f75df574-bzp4j" Jan 14 13:24:37.823855 kubelet[3568]: I0114 13:24:37.822729 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6429fc0d-9418-4130-9565-c4635a2ad349-config-volume\") pod \"coredns-76f75df574-bzp4j\" (UID: \"6429fc0d-9418-4130-9565-c4635a2ad349\") " pod="kube-system/coredns-76f75df574-bzp4j" Jan 14 13:24:37.824885 kubelet[3568]: W0114 13:24:37.824748 3568 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152.2.0-a-0907529617" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-0907529617' and this object Jan 14 13:24:37.825012 kubelet[3568]: E0114 13:24:37.825001 3568 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152.2.0-a-0907529617" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152.2.0-a-0907529617' and this object Jan 14 13:24:38.459000 kubelet[3568]: I0114 13:24:38.458937 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gjjns" 
podStartSLOduration=10.935589726 podStartE2EDuration="19.458878521s" podCreationTimestamp="2025-01-14 13:24:19 +0000 UTC" firstStartedPulling="2025-01-14 13:24:20.863797253 +0000 UTC m=+14.679744498" lastFinishedPulling="2025-01-14 13:24:29.387086148 +0000 UTC m=+23.203033293" observedRunningTime="2025-01-14 13:24:38.45882222 +0000 UTC m=+32.274769365" watchObservedRunningTime="2025-01-14 13:24:38.458878521 +0000 UTC m=+32.274825666" Jan 14 13:24:39.013379 containerd[1829]: time="2025-01-14T13:24:39.013326692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bzp4j,Uid:6429fc0d-9418-4130-9565-c4635a2ad349,Namespace:kube-system,Attempt:0,}" Jan 14 13:24:39.024071 containerd[1829]: time="2025-01-14T13:24:39.024032063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svznx,Uid:2a30ac94-4a26-4b9c-9210-d02b447b7c8c,Namespace:kube-system,Attempt:0,}" Jan 14 13:24:39.757731 systemd-networkd[1366]: cilium_host: Link UP Jan 14 13:24:39.759677 systemd-networkd[1366]: cilium_net: Link UP Jan 14 13:24:39.760222 systemd-networkd[1366]: cilium_net: Gained carrier Jan 14 13:24:39.760742 systemd-networkd[1366]: cilium_host: Gained carrier Jan 14 13:24:39.799932 systemd-networkd[1366]: cilium_net: Gained IPv6LL Jan 14 13:24:39.986962 systemd-networkd[1366]: cilium_vxlan: Link UP Jan 14 13:24:39.986972 systemd-networkd[1366]: cilium_vxlan: Gained carrier Jan 14 13:24:40.335931 kernel: NET: Registered PF_ALG protocol family Jan 14 13:24:40.806013 systemd-networkd[1366]: cilium_host: Gained IPv6LL Jan 14 13:24:41.142159 systemd-networkd[1366]: lxc_health: Link UP Jan 14 13:24:41.152331 systemd-networkd[1366]: lxc_health: Gained carrier Jan 14 13:24:41.381917 systemd-networkd[1366]: cilium_vxlan: Gained IPv6LL Jan 14 13:24:41.605179 systemd-networkd[1366]: lxcde52b178aafd: Link UP Jan 14 13:24:41.618887 kernel: eth0: renamed from tmp85fe7 Jan 14 13:24:41.638521 kernel: eth0: renamed from tmp2243e Jan 14 13:24:41.631064 
systemd-networkd[1366]: lxcde52b178aafd: Gained carrier Jan 14 13:24:41.631886 systemd-networkd[1366]: lxc215f0cc75563: Link UP Jan 14 13:24:41.655217 systemd-networkd[1366]: lxc215f0cc75563: Gained carrier Jan 14 13:24:42.725062 systemd-networkd[1366]: lxcde52b178aafd: Gained IPv6LL Jan 14 13:24:43.172943 systemd-networkd[1366]: lxc_health: Gained IPv6LL Jan 14 13:24:43.557033 systemd-networkd[1366]: lxc215f0cc75563: Gained IPv6LL Jan 14 13:24:45.492073 containerd[1829]: time="2025-01-14T13:24:45.491446347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:45.492596 containerd[1829]: time="2025-01-14T13:24:45.492101452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:45.492596 containerd[1829]: time="2025-01-14T13:24:45.492152652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:45.492596 containerd[1829]: time="2025-01-14T13:24:45.492474754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:45.535355 containerd[1829]: time="2025-01-14T13:24:45.535009538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:24:45.535515 containerd[1829]: time="2025-01-14T13:24:45.535402441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:24:45.538773 containerd[1829]: time="2025-01-14T13:24:45.535463941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:45.538773 containerd[1829]: time="2025-01-14T13:24:45.536734349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:24:45.672437 containerd[1829]: time="2025-01-14T13:24:45.672301053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-svznx,Uid:2a30ac94-4a26-4b9c-9210-d02b447b7c8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2243e1948edbba8e797864dd64fbef8332d510d16357eca1b39bfbeb35bb1cb5\"" Jan 14 13:24:45.683670 containerd[1829]: time="2025-01-14T13:24:45.683506828Z" level=info msg="CreateContainer within sandbox \"2243e1948edbba8e797864dd64fbef8332d510d16357eca1b39bfbeb35bb1cb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:24:45.690087 containerd[1829]: time="2025-01-14T13:24:45.690046772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bzp4j,Uid:6429fc0d-9418-4130-9565-c4635a2ad349,Namespace:kube-system,Attempt:0,} returns sandbox id \"85fe7a94a79711c0ac6642c9f474aa9bbb3b034960532f5aab8942ce3e7923a5\"" Jan 14 13:24:45.696226 containerd[1829]: time="2025-01-14T13:24:45.696174113Z" level=info msg="CreateContainer within sandbox \"85fe7a94a79711c0ac6642c9f474aa9bbb3b034960532f5aab8942ce3e7923a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:24:45.728743 containerd[1829]: time="2025-01-14T13:24:45.728693230Z" level=info msg="CreateContainer within sandbox \"2243e1948edbba8e797864dd64fbef8332d510d16357eca1b39bfbeb35bb1cb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7abd1a9097c033d0017fe75c9abec602cb4ef186e90e66ba278ed6f11ae93a52\"" Jan 14 13:24:45.729411 containerd[1829]: time="2025-01-14T13:24:45.729308134Z" level=info msg="StartContainer for \"7abd1a9097c033d0017fe75c9abec602cb4ef186e90e66ba278ed6f11ae93a52\"" Jan 14 13:24:45.735994 containerd[1829]: 
time="2025-01-14T13:24:45.735168573Z" level=info msg="CreateContainer within sandbox \"85fe7a94a79711c0ac6642c9f474aa9bbb3b034960532f5aab8942ce3e7923a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e46350bfd2cbdbb4d36c9df1555247d3bf550294c97a4e68f579f6516bb8103e\"" Jan 14 13:24:45.737391 containerd[1829]: time="2025-01-14T13:24:45.737361287Z" level=info msg="StartContainer for \"e46350bfd2cbdbb4d36c9df1555247d3bf550294c97a4e68f579f6516bb8103e\"" Jan 14 13:24:45.814854 containerd[1829]: time="2025-01-14T13:24:45.814310800Z" level=info msg="StartContainer for \"7abd1a9097c033d0017fe75c9abec602cb4ef186e90e66ba278ed6f11ae93a52\" returns successfully" Jan 14 13:24:45.833671 containerd[1829]: time="2025-01-14T13:24:45.833448228Z" level=info msg="StartContainer for \"e46350bfd2cbdbb4d36c9df1555247d3bf550294c97a4e68f579f6516bb8103e\" returns successfully" Jan 14 13:24:46.471539 kubelet[3568]: I0114 13:24:46.471497 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-svznx" podStartSLOduration=27.471449983 podStartE2EDuration="27.471449983s" podCreationTimestamp="2025-01-14 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:24:46.470669678 +0000 UTC m=+40.286616823" watchObservedRunningTime="2025-01-14 13:24:46.471449983 +0000 UTC m=+40.287397228" Jan 14 13:24:46.491718 kubelet[3568]: I0114 13:24:46.491131 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bzp4j" podStartSLOduration=27.491080114 podStartE2EDuration="27.491080114s" podCreationTimestamp="2025-01-14 13:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:24:46.490840712 +0000 UTC m=+40.306787857" watchObservedRunningTime="2025-01-14 13:24:46.491080114 +0000 UTC 
m=+40.307027259" Jan 14 13:26:34.913956 systemd[1]: Started sshd@7-10.200.4.19:22-10.200.16.10:54236.service - OpenSSH per-connection server daemon (10.200.16.10:54236). Jan 14 13:26:35.522604 sshd[4944]: Accepted publickey for core from 10.200.16.10 port 54236 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:26:35.524265 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:35.528861 systemd-logind[1807]: New session 10 of user core. Jan 14 13:26:35.534332 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 13:26:36.029356 sshd[4947]: Connection closed by 10.200.16.10 port 54236 Jan 14 13:26:36.030261 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:36.033898 systemd[1]: sshd@7-10.200.4.19:22-10.200.16.10:54236.service: Deactivated successfully. Jan 14 13:26:36.039056 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:26:36.040690 systemd-logind[1807]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:26:36.041771 systemd-logind[1807]: Removed session 10. Jan 14 13:26:41.139103 systemd[1]: Started sshd@8-10.200.4.19:22-10.200.16.10:48274.service - OpenSSH per-connection server daemon (10.200.16.10:48274). Jan 14 13:26:41.745605 sshd[4958]: Accepted publickey for core from 10.200.16.10 port 48274 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:26:41.747336 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:41.753559 systemd-logind[1807]: New session 11 of user core. Jan 14 13:26:41.759060 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 13:26:42.231711 sshd[4961]: Connection closed by 10.200.16.10 port 48274 Jan 14 13:26:42.232129 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:42.237569 systemd[1]: sshd@8-10.200.4.19:22-10.200.16.10:48274.service: Deactivated successfully. Jan 14 13:26:42.242360 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 13:26:42.242527 systemd-logind[1807]: Session 11 logged out. Waiting for processes to exit. Jan 14 13:26:42.244141 systemd-logind[1807]: Removed session 11. Jan 14 13:26:47.336375 systemd[1]: Started sshd@9-10.200.4.19:22-10.200.16.10:41118.service - OpenSSH per-connection server daemon (10.200.16.10:41118). Jan 14 13:26:47.950533 sshd[4973]: Accepted publickey for core from 10.200.16.10 port 41118 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:26:47.952209 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:47.956861 systemd-logind[1807]: New session 12 of user core. Jan 14 13:26:47.960196 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 13:26:48.439736 sshd[4976]: Connection closed by 10.200.16.10 port 41118 Jan 14 13:26:48.440647 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:48.444004 systemd[1]: sshd@9-10.200.4.19:22-10.200.16.10:41118.service: Deactivated successfully. Jan 14 13:26:48.450254 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 13:26:48.450460 systemd-logind[1807]: Session 12 logged out. Waiting for processes to exit. Jan 14 13:26:48.453013 systemd-logind[1807]: Removed session 12. Jan 14 13:26:53.543335 systemd[1]: Started sshd@10-10.200.4.19:22-10.200.16.10:41132.service - OpenSSH per-connection server daemon (10.200.16.10:41132). 
Jan 14 13:26:54.149656 sshd[4991]: Accepted publickey for core from 10.200.16.10 port 41132 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:26:54.151389 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:26:54.157367 systemd-logind[1807]: New session 13 of user core. Jan 14 13:26:54.162014 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 13:26:54.644588 sshd[4994]: Connection closed by 10.200.16.10 port 41132 Jan 14 13:26:54.645377 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Jan 14 13:26:54.648035 systemd[1]: sshd@10-10.200.4.19:22-10.200.16.10:41132.service: Deactivated successfully. Jan 14 13:26:54.652533 systemd-logind[1807]: Session 13 logged out. Waiting for processes to exit. Jan 14 13:26:54.654693 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 13:26:54.655575 systemd-logind[1807]: Removed session 13. Jan 14 13:26:59.749421 systemd[1]: Started sshd@11-10.200.4.19:22-10.200.16.10:54020.service - OpenSSH per-connection server daemon (10.200.16.10:54020). Jan 14 13:27:00.353964 sshd[5006]: Accepted publickey for core from 10.200.16.10 port 54020 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:00.355652 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:00.360020 systemd-logind[1807]: New session 14 of user core. Jan 14 13:27:00.368247 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 13:27:00.848692 sshd[5009]: Connection closed by 10.200.16.10 port 54020 Jan 14 13:27:00.849592 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:00.854236 systemd[1]: sshd@11-10.200.4.19:22-10.200.16.10:54020.service: Deactivated successfully. Jan 14 13:27:00.860947 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 13:27:00.862172 systemd-logind[1807]: Session 14 logged out. 
Waiting for processes to exit. Jan 14 13:27:00.863338 systemd-logind[1807]: Removed session 14. Jan 14 13:27:00.952275 systemd[1]: Started sshd@12-10.200.4.19:22-10.200.16.10:54036.service - OpenSSH per-connection server daemon (10.200.16.10:54036). Jan 14 13:27:01.560968 sshd[5021]: Accepted publickey for core from 10.200.16.10 port 54036 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:01.562999 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:01.567934 systemd-logind[1807]: New session 15 of user core. Jan 14 13:27:01.576105 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 13:27:02.090086 sshd[5024]: Connection closed by 10.200.16.10 port 54036 Jan 14 13:27:02.090957 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:02.094970 systemd[1]: sshd@12-10.200.4.19:22-10.200.16.10:54036.service: Deactivated successfully. Jan 14 13:27:02.099697 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 13:27:02.100889 systemd-logind[1807]: Session 15 logged out. Waiting for processes to exit. Jan 14 13:27:02.101844 systemd-logind[1807]: Removed session 15. Jan 14 13:27:02.194372 systemd[1]: Started sshd@13-10.200.4.19:22-10.200.16.10:54042.service - OpenSSH per-connection server daemon (10.200.16.10:54042). Jan 14 13:27:02.801906 sshd[5033]: Accepted publickey for core from 10.200.16.10 port 54042 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:02.803787 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:02.811241 systemd-logind[1807]: New session 16 of user core. Jan 14 13:27:02.817126 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 13:27:03.296036 sshd[5036]: Connection closed by 10.200.16.10 port 54042 Jan 14 13:27:03.296819 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:03.301554 systemd[1]: sshd@13-10.200.4.19:22-10.200.16.10:54042.service: Deactivated successfully. Jan 14 13:27:03.307343 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 13:27:03.308247 systemd-logind[1807]: Session 16 logged out. Waiting for processes to exit. Jan 14 13:27:03.309188 systemd-logind[1807]: Removed session 16. Jan 14 13:27:08.401409 systemd[1]: Started sshd@14-10.200.4.19:22-10.200.16.10:36524.service - OpenSSH per-connection server daemon (10.200.16.10:36524). Jan 14 13:27:09.007824 sshd[5049]: Accepted publickey for core from 10.200.16.10 port 36524 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:09.009417 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:09.014266 systemd-logind[1807]: New session 17 of user core. Jan 14 13:27:09.022131 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 13:27:09.504356 sshd[5052]: Connection closed by 10.200.16.10 port 36524 Jan 14 13:27:09.505235 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:09.508014 systemd[1]: sshd@14-10.200.4.19:22-10.200.16.10:36524.service: Deactivated successfully. Jan 14 13:27:09.512315 systemd-logind[1807]: Session 17 logged out. Waiting for processes to exit. Jan 14 13:27:09.513342 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 13:27:09.515415 systemd-logind[1807]: Removed session 17. Jan 14 13:27:09.608070 systemd[1]: Started sshd@15-10.200.4.19:22-10.200.16.10:36528.service - OpenSSH per-connection server daemon (10.200.16.10:36528). 
Jan 14 13:27:10.211627 sshd[5063]: Accepted publickey for core from 10.200.16.10 port 36528 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:10.213212 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:10.217547 systemd-logind[1807]: New session 18 of user core. Jan 14 13:27:10.224147 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 13:27:10.749168 sshd[5066]: Connection closed by 10.200.16.10 port 36528 Jan 14 13:27:10.750221 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:10.754303 systemd[1]: sshd@15-10.200.4.19:22-10.200.16.10:36528.service: Deactivated successfully. Jan 14 13:27:10.759730 systemd-logind[1807]: Session 18 logged out. Waiting for processes to exit. Jan 14 13:27:10.760309 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 13:27:10.761688 systemd-logind[1807]: Removed session 18. Jan 14 13:27:10.854079 systemd[1]: Started sshd@16-10.200.4.19:22-10.200.16.10:36530.service - OpenSSH per-connection server daemon (10.200.16.10:36530). Jan 14 13:27:11.458993 sshd[5075]: Accepted publickey for core from 10.200.16.10 port 36530 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:11.460451 sshd-session[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:11.464463 systemd-logind[1807]: New session 19 of user core. Jan 14 13:27:11.472041 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 13:27:13.422117 sshd[5078]: Connection closed by 10.200.16.10 port 36530 Jan 14 13:27:13.423023 sshd-session[5075]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:13.426925 systemd[1]: sshd@16-10.200.4.19:22-10.200.16.10:36530.service: Deactivated successfully. Jan 14 13:27:13.433499 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 13:27:13.434418 systemd-logind[1807]: Session 19 logged out. 
Waiting for processes to exit. Jan 14 13:27:13.435370 systemd-logind[1807]: Removed session 19. Jan 14 13:27:13.528061 systemd[1]: Started sshd@17-10.200.4.19:22-10.200.16.10:36538.service - OpenSSH per-connection server daemon (10.200.16.10:36538). Jan 14 13:27:14.129506 sshd[5094]: Accepted publickey for core from 10.200.16.10 port 36538 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:14.131145 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:14.135251 systemd-logind[1807]: New session 20 of user core. Jan 14 13:27:14.142184 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 13:27:14.729918 sshd[5097]: Connection closed by 10.200.16.10 port 36538 Jan 14 13:27:14.730662 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:14.734633 systemd[1]: sshd@17-10.200.4.19:22-10.200.16.10:36538.service: Deactivated successfully. Jan 14 13:27:14.739222 systemd-logind[1807]: Session 20 logged out. Waiting for processes to exit. Jan 14 13:27:14.739555 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 13:27:14.741426 systemd-logind[1807]: Removed session 20. Jan 14 13:27:14.835332 systemd[1]: Started sshd@18-10.200.4.19:22-10.200.16.10:36546.service - OpenSSH per-connection server daemon (10.200.16.10:36546). Jan 14 13:27:15.438283 sshd[5106]: Accepted publickey for core from 10.200.16.10 port 36546 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:15.439718 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:15.444081 systemd-logind[1807]: New session 21 of user core. Jan 14 13:27:15.451339 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 13:27:15.926533 sshd[5109]: Connection closed by 10.200.16.10 port 36546 Jan 14 13:27:15.927159 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:15.930917 systemd[1]: sshd@18-10.200.4.19:22-10.200.16.10:36546.service: Deactivated successfully. Jan 14 13:27:15.937161 systemd-logind[1807]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:27:15.938069 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:27:15.939522 systemd-logind[1807]: Removed session 21. Jan 14 13:27:19.443695 update_engine[1812]: I20250114 13:27:19.443087 1812 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 13:27:19.443695 update_engine[1812]: I20250114 13:27:19.443152 1812 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 13:27:19.443695 update_engine[1812]: I20250114 13:27:19.443348 1812 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.443980 1812 omaha_request_params.cc:62] Current group set to stable Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.444120 1812 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.444133 1812 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.444154 1812 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.444195 1812 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.447566 1812 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 13:27:19.447668 update_engine[1812]: I20250114 13:27:19.447659 1812 omaha_request_action.cc:272] Request: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.447668 update_engine[1812]: Jan 14 13:27:19.448210 locksmithd[1851]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 13:27:19.448504 update_engine[1812]: I20250114 13:27:19.447669 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:27:19.449195 update_engine[1812]: I20250114 13:27:19.449163 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:27:19.449541 update_engine[1812]: I20250114 13:27:19.449509 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:27:19.456064 update_engine[1812]: E20250114 13:27:19.456023 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:27:19.456162 update_engine[1812]: I20250114 13:27:19.456112 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 13:27:21.030058 systemd[1]: Started sshd@19-10.200.4.19:22-10.200.16.10:48982.service - OpenSSH per-connection server daemon (10.200.16.10:48982). 
Jan 14 13:27:21.633311 sshd[5123]: Accepted publickey for core from 10.200.16.10 port 48982 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:21.634883 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:21.638928 systemd-logind[1807]: New session 22 of user core. Jan 14 13:27:21.643063 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 13:27:22.125173 sshd[5128]: Connection closed by 10.200.16.10 port 48982 Jan 14 13:27:22.126081 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:22.130545 systemd[1]: sshd@19-10.200.4.19:22-10.200.16.10:48982.service: Deactivated successfully. Jan 14 13:27:22.134882 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:27:22.136033 systemd-logind[1807]: Session 22 logged out. Waiting for processes to exit. Jan 14 13:27:22.137027 systemd-logind[1807]: Removed session 22. Jan 14 13:27:27.230334 systemd[1]: Started sshd@20-10.200.4.19:22-10.200.16.10:39348.service - OpenSSH per-connection server daemon (10.200.16.10:39348). Jan 14 13:27:27.836080 sshd[5140]: Accepted publickey for core from 10.200.16.10 port 39348 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:27.837579 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:27.843565 systemd-logind[1807]: New session 23 of user core. Jan 14 13:27:27.847129 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 13:27:28.326879 sshd[5146]: Connection closed by 10.200.16.10 port 39348 Jan 14 13:27:28.327650 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:28.330431 systemd[1]: sshd@20-10.200.4.19:22-10.200.16.10:39348.service: Deactivated successfully. Jan 14 13:27:28.335389 systemd-logind[1807]: Session 23 logged out. Waiting for processes to exit. 
Jan 14 13:27:28.336617 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 13:27:28.338411 systemd-logind[1807]: Removed session 23. Jan 14 13:27:29.439982 update_engine[1812]: I20250114 13:27:29.439896 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:27:29.440570 update_engine[1812]: I20250114 13:27:29.440253 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:27:29.440633 update_engine[1812]: I20250114 13:27:29.440601 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:27:29.455898 update_engine[1812]: E20250114 13:27:29.455839 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:27:29.456041 update_engine[1812]: I20250114 13:27:29.455928 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 13:27:33.432126 systemd[1]: Started sshd@21-10.200.4.19:22-10.200.16.10:39354.service - OpenSSH per-connection server daemon (10.200.16.10:39354). Jan 14 13:27:34.038477 sshd[5157]: Accepted publickey for core from 10.200.16.10 port 39354 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:34.040029 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:34.050887 systemd-logind[1807]: New session 24 of user core. Jan 14 13:27:34.056197 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 13:27:34.525893 sshd[5160]: Connection closed by 10.200.16.10 port 39354 Jan 14 13:27:34.526614 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:34.531286 systemd[1]: sshd@21-10.200.4.19:22-10.200.16.10:39354.service: Deactivated successfully. Jan 14 13:27:34.535192 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 13:27:34.535999 systemd-logind[1807]: Session 24 logged out. Waiting for processes to exit. Jan 14 13:27:34.536941 systemd-logind[1807]: Removed session 24. 
Jan 14 13:27:34.630111 systemd[1]: Started sshd@22-10.200.4.19:22-10.200.16.10:39370.service - OpenSSH per-connection server daemon (10.200.16.10:39370). Jan 14 13:27:35.232413 sshd[5171]: Accepted publickey for core from 10.200.16.10 port 39370 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:35.233863 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:35.238193 systemd-logind[1807]: New session 25 of user core. Jan 14 13:27:35.244465 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 13:27:36.891239 systemd[1]: run-containerd-runc-k8s.io-ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44-runc.6qtg9J.mount: Deactivated successfully. Jan 14 13:27:36.895607 containerd[1829]: time="2025-01-14T13:27:36.895567043Z" level=info msg="StopContainer for \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" with timeout 30 (s)" Jan 14 13:27:36.897786 containerd[1829]: time="2025-01-14T13:27:36.896366938Z" level=info msg="Stop container \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" with signal terminated" Jan 14 13:27:36.910201 containerd[1829]: time="2025-01-14T13:27:36.909685960Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:27:36.919905 containerd[1829]: time="2025-01-14T13:27:36.919839101Z" level=info msg="StopContainer for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" with timeout 2 (s)" Jan 14 13:27:36.924037 containerd[1829]: time="2025-01-14T13:27:36.923832478Z" level=info msg="Stop container \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" with signal terminated" Jan 14 13:27:36.955433 systemd-networkd[1366]: lxc_health: Link DOWN Jan 14 13:27:36.955448 
systemd-networkd[1366]: lxc_health: Lost carrier Jan 14 13:27:36.972950 kubelet[3568]: E0114 13:27:36.972081 3568 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod7141d984-4995-478e-9f1d-b12fd40144ce/4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\": RecentStats: unable to find data in memory cache]" Jan 14 13:27:36.983972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e-rootfs.mount: Deactivated successfully. Jan 14 13:27:37.004576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44-rootfs.mount: Deactivated successfully. Jan 14 13:27:37.064032 containerd[1829]: time="2025-01-14T13:27:37.063938762Z" level=info msg="shim disconnected" id=4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e namespace=k8s.io Jan 14 13:27:37.064032 containerd[1829]: time="2025-01-14T13:27:37.064058161Z" level=warning msg="cleaning up after shim disconnected" id=4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e namespace=k8s.io Jan 14 13:27:37.064032 containerd[1829]: time="2025-01-14T13:27:37.064077761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:37.064871 containerd[1829]: time="2025-01-14T13:27:37.064078261Z" level=info msg="shim disconnected" id=ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44 namespace=k8s.io Jan 14 13:27:37.064871 containerd[1829]: time="2025-01-14T13:27:37.064652257Z" level=warning msg="cleaning up after shim disconnected" id=ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44 namespace=k8s.io Jan 14 13:27:37.064871 containerd[1829]: time="2025-01-14T13:27:37.064667457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:37.085634 containerd[1829]: time="2025-01-14T13:27:37.085562036Z" level=warning 
msg="cleanup warnings time=\"2025-01-14T13:27:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:27:37.091180 containerd[1829]: time="2025-01-14T13:27:37.091141703Z" level=info msg="StopContainer for \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" returns successfully" Jan 14 13:27:37.092899 containerd[1829]: time="2025-01-14T13:27:37.091971698Z" level=info msg="StopPodSandbox for \"92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86\"" Jan 14 13:27:37.092899 containerd[1829]: time="2025-01-14T13:27:37.092013598Z" level=info msg="Container to stop \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:27:37.094778 containerd[1829]: time="2025-01-14T13:27:37.093351790Z" level=info msg="StopContainer for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" returns successfully" Jan 14 13:27:37.095522 containerd[1829]: time="2025-01-14T13:27:37.095438578Z" level=info msg="StopPodSandbox for \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\"" Jan 14 13:27:37.095706 containerd[1829]: time="2025-01-14T13:27:37.095594577Z" level=info msg="Container to stop \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:27:37.096828 containerd[1829]: time="2025-01-14T13:27:37.095816876Z" level=info msg="Container to stop \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:27:37.096828 containerd[1829]: time="2025-01-14T13:27:37.095840376Z" level=info msg="Container to stop \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jan 14 13:27:37.096828 containerd[1829]: time="2025-01-14T13:27:37.095852776Z" level=info msg="Container to stop \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:27:37.097208 containerd[1829]: time="2025-01-14T13:27:37.095866276Z" level=info msg="Container to stop \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 14 13:27:37.097456 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86-shm.mount: Deactivated successfully. Jan 14 13:27:37.163141 containerd[1829]: time="2025-01-14T13:27:37.162738586Z" level=info msg="shim disconnected" id=92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86 namespace=k8s.io Jan 14 13:27:37.163141 containerd[1829]: time="2025-01-14T13:27:37.162826185Z" level=warning msg="cleaning up after shim disconnected" id=92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86 namespace=k8s.io Jan 14 13:27:37.163141 containerd[1829]: time="2025-01-14T13:27:37.162841085Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:37.166572 containerd[1829]: time="2025-01-14T13:27:37.164209677Z" level=info msg="shim disconnected" id=fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912 namespace=k8s.io Jan 14 13:27:37.166572 containerd[1829]: time="2025-01-14T13:27:37.164273977Z" level=warning msg="cleaning up after shim disconnected" id=fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912 namespace=k8s.io Jan 14 13:27:37.166572 containerd[1829]: time="2025-01-14T13:27:37.164285877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:37.185620 containerd[1829]: time="2025-01-14T13:27:37.185567753Z" level=warning msg="cleanup warnings time=\"2025-01-14T13:27:37Z\" level=warning msg=\"failed 
to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:27:37.186430 containerd[1829]: time="2025-01-14T13:27:37.186395048Z" level=info msg="TearDown network for sandbox \"92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86\" successfully" Jan 14 13:27:37.186430 containerd[1829]: time="2025-01-14T13:27:37.186429348Z" level=info msg="StopPodSandbox for \"92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86\" returns successfully" Jan 14 13:27:37.187826 containerd[1829]: time="2025-01-14T13:27:37.187605941Z" level=info msg="TearDown network for sandbox \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" successfully" Jan 14 13:27:37.187826 containerd[1829]: time="2025-01-14T13:27:37.187637741Z" level=info msg="StopPodSandbox for \"fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912\" returns successfully" Jan 14 13:27:37.349303 kubelet[3568]: I0114 13:27:37.349262 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-net\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349534 kubelet[3568]: I0114 13:27:37.349317 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-etc-cni-netd\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349534 kubelet[3568]: I0114 13:27:37.349362 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9hb9\" (UniqueName: \"kubernetes.io/projected/7141d984-4995-478e-9f1d-b12fd40144ce-kube-api-access-d9hb9\") pod \"7141d984-4995-478e-9f1d-b12fd40144ce\" (UID: 
\"7141d984-4995-478e-9f1d-b12fd40144ce\") " Jan 14 13:27:37.349534 kubelet[3568]: I0114 13:27:37.349410 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-cgroup\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349534 kubelet[3568]: I0114 13:27:37.349440 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-run\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349534 kubelet[3568]: I0114 13:27:37.349470 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hostproc\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349534 kubelet[3568]: I0114 13:27:37.349500 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-xtables-lock\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349912 kubelet[3568]: I0114 13:27:37.349540 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hubble-tls\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349912 kubelet[3568]: I0114 13:27:37.349569 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cni-path\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349912 kubelet[3568]: I0114 13:27:37.349609 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7141d984-4995-478e-9f1d-b12fd40144ce-cilium-config-path\") pod \"7141d984-4995-478e-9f1d-b12fd40144ce\" (UID: \"7141d984-4995-478e-9f1d-b12fd40144ce\") " Jan 14 13:27:37.349912 kubelet[3568]: I0114 13:27:37.349641 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-bpf-maps\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349912 kubelet[3568]: I0114 13:27:37.349678 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knm8s\" (UniqueName: \"kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-kube-api-access-knm8s\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.349912 kubelet[3568]: I0114 13:27:37.349717 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d13ace9-7237-4ca8-b3a4-687877cea7f5-clustermesh-secrets\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.350207 kubelet[3568]: I0114 13:27:37.349750 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-kernel\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.350207 kubelet[3568]: I0114 
13:27:37.349818 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-config-path\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.350207 kubelet[3568]: I0114 13:27:37.349850 3568 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-lib-modules\") pod \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\" (UID: \"6d13ace9-7237-4ca8-b3a4-687877cea7f5\") " Jan 14 13:27:37.350207 kubelet[3568]: I0114 13:27:37.349939 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.350207 kubelet[3568]: I0114 13:27:37.349997 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.352154 kubelet[3568]: I0114 13:27:37.350023 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.352154 kubelet[3568]: I0114 13:27:37.350381 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.352154 kubelet[3568]: I0114 13:27:37.350431 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.352154 kubelet[3568]: I0114 13:27:37.350460 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.352154 kubelet[3568]: I0114 13:27:37.350523 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.352368 kubelet[3568]: I0114 13:27:37.350542 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.353334 kubelet[3568]: I0114 13:27:37.353305 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.353993 kubelet[3568]: I0114 13:27:37.353971 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 14 13:27:37.354208 kubelet[3568]: I0114 13:27:37.354186 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7141d984-4995-478e-9f1d-b12fd40144ce-kube-api-access-d9hb9" (OuterVolumeSpecName: "kube-api-access-d9hb9") pod "7141d984-4995-478e-9f1d-b12fd40144ce" (UID: "7141d984-4995-478e-9f1d-b12fd40144ce"). InnerVolumeSpecName "kube-api-access-d9hb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:27:37.359237 kubelet[3568]: I0114 13:27:37.359211 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d13ace9-7237-4ca8-b3a4-687877cea7f5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 14 13:27:37.359434 kubelet[3568]: I0114 13:27:37.359412 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-kube-api-access-knm8s" (OuterVolumeSpecName: "kube-api-access-knm8s") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "kube-api-access-knm8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:27:37.359587 kubelet[3568]: I0114 13:27:37.359562 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 14 13:27:37.360079 kubelet[3568]: I0114 13:27:37.360023 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d13ace9-7237-4ca8-b3a4-687877cea7f5" (UID: "6d13ace9-7237-4ca8-b3a4-687877cea7f5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 14 13:27:37.360403 kubelet[3568]: I0114 13:27:37.360384 3568 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7141d984-4995-478e-9f1d-b12fd40144ce-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7141d984-4995-478e-9f1d-b12fd40144ce" (UID: "7141d984-4995-478e-9f1d-b12fd40144ce"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 14 13:27:37.450205 kubelet[3568]: I0114 13:27:37.450048 3568 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-net\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450205 kubelet[3568]: I0114 13:27:37.450101 3568 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-etc-cni-netd\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450205 kubelet[3568]: I0114 13:27:37.450124 3568 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d9hb9\" (UniqueName: \"kubernetes.io/projected/7141d984-4995-478e-9f1d-b12fd40144ce-kube-api-access-d9hb9\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450205 kubelet[3568]: I0114 13:27:37.450141 3568 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-cgroup\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450205 kubelet[3568]: I0114 13:27:37.450158 3568 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-run\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450205 kubelet[3568]: I0114 
13:27:37.450207 3568 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hostproc\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450223 3568 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-xtables-lock\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450239 3568 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-hubble-tls\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450253 3568 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cni-path\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450269 3568 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7141d984-4995-478e-9f1d-b12fd40144ce-cilium-config-path\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450284 3568 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-bpf-maps\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450301 3568 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-knm8s\" (UniqueName: \"kubernetes.io/projected/6d13ace9-7237-4ca8-b3a4-687877cea7f5-kube-api-access-knm8s\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450320 3568 
reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d13ace9-7237-4ca8-b3a4-687877cea7f5-clustermesh-secrets\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.450632 kubelet[3568]: I0114 13:27:37.450336 3568 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-host-proc-sys-kernel\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.451100 kubelet[3568]: I0114 13:27:37.450355 3568 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d13ace9-7237-4ca8-b3a4-687877cea7f5-cilium-config-path\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.451100 kubelet[3568]: I0114 13:27:37.450371 3568 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d13ace9-7237-4ca8-b3a4-687877cea7f5-lib-modules\") on node \"ci-4152.2.0-a-0907529617\" DevicePath \"\"" Jan 14 13:27:37.815820 kubelet[3568]: I0114 13:27:37.813883 3568 scope.go:117] "RemoveContainer" containerID="4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e" Jan 14 13:27:37.818197 containerd[1829]: time="2025-01-14T13:27:37.818158667Z" level=info msg="RemoveContainer for \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\"" Jan 14 13:27:37.849858 containerd[1829]: time="2025-01-14T13:27:37.849804483Z" level=info msg="RemoveContainer for \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" returns successfully" Jan 14 13:27:37.850208 kubelet[3568]: I0114 13:27:37.850166 3568 scope.go:117] "RemoveContainer" containerID="4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e" Jan 14 13:27:37.850499 containerd[1829]: time="2025-01-14T13:27:37.850457079Z" level=error msg="ContainerStatus for 
\"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\": not found" Jan 14 13:27:37.850678 kubelet[3568]: E0114 13:27:37.850655 3568 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\": not found" containerID="4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e" Jan 14 13:27:37.850841 kubelet[3568]: I0114 13:27:37.850823 3568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e"} err="failed to get container status \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4872bdce808cb57178e551c0f5c52116003c96cdf26fae95302115efbade4f2e\": not found" Jan 14 13:27:37.850930 kubelet[3568]: I0114 13:27:37.850859 3568 scope.go:117] "RemoveContainer" containerID="ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44" Jan 14 13:27:37.853987 containerd[1829]: time="2025-01-14T13:27:37.853955759Z" level=info msg="RemoveContainer for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\"" Jan 14 13:27:37.861854 containerd[1829]: time="2025-01-14T13:27:37.861811113Z" level=info msg="RemoveContainer for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" returns successfully" Jan 14 13:27:37.862110 kubelet[3568]: I0114 13:27:37.862084 3568 scope.go:117] "RemoveContainer" containerID="eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f" Jan 14 13:27:37.863213 containerd[1829]: time="2025-01-14T13:27:37.863183705Z" level=info msg="RemoveContainer for 
\"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\"" Jan 14 13:27:37.870019 containerd[1829]: time="2025-01-14T13:27:37.869987265Z" level=info msg="RemoveContainer for \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\" returns successfully" Jan 14 13:27:37.870242 kubelet[3568]: I0114 13:27:37.870163 3568 scope.go:117] "RemoveContainer" containerID="9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6" Jan 14 13:27:37.871239 containerd[1829]: time="2025-01-14T13:27:37.871211458Z" level=info msg="RemoveContainer for \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\"" Jan 14 13:27:37.879582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f9b68ad1c3eb56076300e66b02b342e1936e15372e4a5849b492f3a5fb1f86-rootfs.mount: Deactivated successfully. Jan 14 13:27:37.880042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912-rootfs.mount: Deactivated successfully. Jan 14 13:27:37.880644 containerd[1829]: time="2025-01-14T13:27:37.880351105Z" level=info msg="RemoveContainer for \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\" returns successfully" Jan 14 13:27:37.880192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa792318dc660424fe1b9061dfde60f3198252f4d3d38d6f1bb7df7926ed6912-shm.mount: Deactivated successfully. Jan 14 13:27:37.881041 kubelet[3568]: I0114 13:27:37.880646 3568 scope.go:117] "RemoveContainer" containerID="7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b" Jan 14 13:27:37.880324 systemd[1]: var-lib-kubelet-pods-7141d984\x2d4995\x2d478e\x2d9f1d\x2db12fd40144ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9hb9.mount: Deactivated successfully. Jan 14 13:27:37.880791 systemd[1]: var-lib-kubelet-pods-6d13ace9\x2d7237\x2d4ca8\x2db3a4\x2d687877cea7f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dknm8s.mount: Deactivated successfully. 
Jan 14 13:27:37.881171 systemd[1]: var-lib-kubelet-pods-6d13ace9\x2d7237\x2d4ca8\x2db3a4\x2d687877cea7f5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 14 13:27:37.881633 systemd[1]: var-lib-kubelet-pods-6d13ace9\x2d7237\x2d4ca8\x2db3a4\x2d687877cea7f5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 14 13:27:37.882545 containerd[1829]: time="2025-01-14T13:27:37.882089495Z" level=info msg="RemoveContainer for \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\"" Jan 14 13:27:37.890352 containerd[1829]: time="2025-01-14T13:27:37.890318547Z" level=info msg="RemoveContainer for \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\" returns successfully" Jan 14 13:27:37.890515 kubelet[3568]: I0114 13:27:37.890494 3568 scope.go:117] "RemoveContainer" containerID="ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6" Jan 14 13:27:37.891539 containerd[1829]: time="2025-01-14T13:27:37.891517840Z" level=info msg="RemoveContainer for \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\"" Jan 14 13:27:37.898838 containerd[1829]: time="2025-01-14T13:27:37.898805597Z" level=info msg="RemoveContainer for \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\" returns successfully" Jan 14 13:27:37.899411 containerd[1829]: time="2025-01-14T13:27:37.899268595Z" level=error msg="ContainerStatus for \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\": not found" Jan 14 13:27:37.899505 kubelet[3568]: I0114 13:27:37.899029 3568 scope.go:117] "RemoveContainer" containerID="ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44" Jan 14 13:27:37.899505 kubelet[3568]: E0114 13:27:37.899456 3568 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\": not found" containerID="ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44" Jan 14 13:27:37.899505 kubelet[3568]: I0114 13:27:37.899500 3568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44"} err="failed to get container status \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec55ec88ccea956446908c626971ce58ece2c979ed196c535f674c4cb9b0fe44\": not found" Jan 14 13:27:37.899701 kubelet[3568]: I0114 13:27:37.899518 3568 scope.go:117] "RemoveContainer" containerID="eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f" Jan 14 13:27:37.899794 containerd[1829]: time="2025-01-14T13:27:37.899722092Z" level=error msg="ContainerStatus for \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\": not found" Jan 14 13:27:37.899910 kubelet[3568]: E0114 13:27:37.899873 3568 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\": not found" containerID="eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f" Jan 14 13:27:37.899910 kubelet[3568]: I0114 13:27:37.899908 3568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f"} err="failed to get container status 
\"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb6423c3483bc43a04a95c16dd4ce66dbb50820642ebe9ba5fe7a05651c79c0f\": not found" Jan 14 13:27:37.900056 kubelet[3568]: I0114 13:27:37.899922 3568 scope.go:117] "RemoveContainer" containerID="9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6" Jan 14 13:27:37.900175 containerd[1829]: time="2025-01-14T13:27:37.900121290Z" level=error msg="ContainerStatus for \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\": not found" Jan 14 13:27:37.900268 kubelet[3568]: E0114 13:27:37.900244 3568 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\": not found" containerID="9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6" Jan 14 13:27:37.900321 kubelet[3568]: I0114 13:27:37.900287 3568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6"} err="failed to get container status \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b47c7aad60cd2329fd63bf0873f99617fecd69c8407d6a0e6e0722b2337d0f6\": not found" Jan 14 13:27:37.900321 kubelet[3568]: I0114 13:27:37.900302 3568 scope.go:117] "RemoveContainer" containerID="7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b" Jan 14 13:27:37.900531 containerd[1829]: time="2025-01-14T13:27:37.900459388Z" level=error msg="ContainerStatus for 
\"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\": not found" Jan 14 13:27:37.900660 kubelet[3568]: E0114 13:27:37.900634 3568 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\": not found" containerID="7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b" Jan 14 13:27:37.900731 kubelet[3568]: I0114 13:27:37.900664 3568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b"} err="failed to get container status \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\": rpc error: code = NotFound desc = an error occurred when try to find container \"7343084cd52ae403c5025d94474c5cd592b7df67dba6068cfb3160a1de05c14b\": not found" Jan 14 13:27:37.900731 kubelet[3568]: I0114 13:27:37.900676 3568 scope.go:117] "RemoveContainer" containerID="ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6" Jan 14 13:27:37.900992 containerd[1829]: time="2025-01-14T13:27:37.900964085Z" level=error msg="ContainerStatus for \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\": not found" Jan 14 13:27:37.901110 kubelet[3568]: E0114 13:27:37.901093 3568 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\": not found" 
containerID="ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6" Jan 14 13:27:37.901179 kubelet[3568]: I0114 13:27:37.901126 3568 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6"} err="failed to get container status \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca3775b626a04e4bc640ae704b0b1ef0a3168d6a77c91895fcce6c58bac5d9e6\": not found" Jan 14 13:27:38.307398 kubelet[3568]: I0114 13:27:38.307354 3568 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" path="/var/lib/kubelet/pods/6d13ace9-7237-4ca8-b3a4-687877cea7f5/volumes" Jan 14 13:27:38.308126 kubelet[3568]: I0114 13:27:38.308089 3568 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7141d984-4995-478e-9f1d-b12fd40144ce" path="/var/lib/kubelet/pods/7141d984-4995-478e-9f1d-b12fd40144ce/volumes" Jan 14 13:27:38.920920 sshd[5174]: Connection closed by 10.200.16.10 port 39370 Jan 14 13:27:38.921350 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:38.924865 systemd[1]: sshd@22-10.200.4.19:22-10.200.16.10:39370.service: Deactivated successfully. Jan 14 13:27:38.940962 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 13:27:38.946851 systemd-logind[1807]: Session 25 logged out. Waiting for processes to exit. Jan 14 13:27:38.948038 systemd-logind[1807]: Removed session 25. Jan 14 13:27:39.024066 systemd[1]: Started sshd@23-10.200.4.19:22-10.200.16.10:58556.service - OpenSSH per-connection server daemon (10.200.16.10:58556). 
Jan 14 13:27:39.439632 update_engine[1812]: I20250114 13:27:39.439561 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:27:39.440150 update_engine[1812]: I20250114 13:27:39.439877 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:27:39.440206 update_engine[1812]: I20250114 13:27:39.440176 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:27:39.454088 update_engine[1812]: E20250114 13:27:39.454032 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:27:39.454223 update_engine[1812]: I20250114 13:27:39.454123 1812 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 13:27:39.627396 sshd[5345]: Accepted publickey for core from 10.200.16.10 port 58556 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:39.629148 sshd-session[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:39.633617 systemd-logind[1807]: New session 26 of user core. Jan 14 13:27:39.638034 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 13:27:40.429920 kubelet[3568]: I0114 13:27:40.429341 3568 topology_manager.go:215] "Topology Admit Handler" podUID="b83dbfe2-6028-428c-9d70-6872ff0f5299" podNamespace="kube-system" podName="cilium-kbkdx" Jan 14 13:27:40.432005 kubelet[3568]: E0114 13:27:40.430810 3568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" containerName="clean-cilium-state" Jan 14 13:27:40.432005 kubelet[3568]: E0114 13:27:40.430842 3568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" containerName="mount-cgroup" Jan 14 13:27:40.432005 kubelet[3568]: E0114 13:27:40.430874 3568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" containerName="mount-bpf-fs" Jan 14 13:27:40.432005 kubelet[3568]: E0114 13:27:40.430886 3568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7141d984-4995-478e-9f1d-b12fd40144ce" containerName="cilium-operator" Jan 14 13:27:40.432005 kubelet[3568]: E0114 13:27:40.430896 3568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" containerName="cilium-agent" Jan 14 13:27:40.432005 kubelet[3568]: E0114 13:27:40.430906 3568 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" containerName="apply-sysctl-overwrites" Jan 14 13:27:40.432005 kubelet[3568]: I0114 13:27:40.431086 3568 memory_manager.go:354] "RemoveStaleState removing state" podUID="7141d984-4995-478e-9f1d-b12fd40144ce" containerName="cilium-operator" Jan 14 13:27:40.432005 kubelet[3568]: I0114 13:27:40.431104 3568 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d13ace9-7237-4ca8-b3a4-687877cea7f5" containerName="cilium-agent" Jan 14 13:27:40.492659 sshd[5348]: Connection closed by 10.200.16.10 port 58556 Jan 14 13:27:40.493679 sshd-session[5345]: pam_unix(sshd:session): session closed for user core 
Jan 14 13:27:40.497110 systemd[1]: sshd@23-10.200.4.19:22-10.200.16.10:58556.service: Deactivated successfully. Jan 14 13:27:40.503668 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:27:40.504656 systemd-logind[1807]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:27:40.505668 systemd-logind[1807]: Removed session 26. Jan 14 13:27:40.566392 kubelet[3568]: I0114 13:27:40.566292 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b83dbfe2-6028-428c-9d70-6872ff0f5299-hubble-tls\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566392 kubelet[3568]: I0114 13:27:40.566359 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b83dbfe2-6028-428c-9d70-6872ff0f5299-clustermesh-secrets\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566892 kubelet[3568]: I0114 13:27:40.566437 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-host-proc-sys-net\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566892 kubelet[3568]: I0114 13:27:40.566489 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8s59\" (UniqueName: \"kubernetes.io/projected/b83dbfe2-6028-428c-9d70-6872ff0f5299-kube-api-access-w8s59\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566892 kubelet[3568]: I0114 13:27:40.566546 3568 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b83dbfe2-6028-428c-9d70-6872ff0f5299-cilium-ipsec-secrets\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566892 kubelet[3568]: I0114 13:27:40.566578 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-hostproc\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566892 kubelet[3568]: I0114 13:27:40.566613 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-bpf-maps\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.566892 kubelet[3568]: I0114 13:27:40.566649 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-etc-cni-netd\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567268 kubelet[3568]: I0114 13:27:40.566682 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-lib-modules\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567268 kubelet[3568]: I0114 13:27:40.566715 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/b83dbfe2-6028-428c-9d70-6872ff0f5299-cilium-config-path\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567268 kubelet[3568]: I0114 13:27:40.566769 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-cni-path\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567268 kubelet[3568]: I0114 13:27:40.566820 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-xtables-lock\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567268 kubelet[3568]: I0114 13:27:40.566953 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-host-proc-sys-kernel\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567268 kubelet[3568]: I0114 13:27:40.566993 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-cilium-run\") pod \"cilium-kbkdx\" (UID: \"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.567449 kubelet[3568]: I0114 13:27:40.567031 3568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b83dbfe2-6028-428c-9d70-6872ff0f5299-cilium-cgroup\") pod \"cilium-kbkdx\" (UID: 
\"b83dbfe2-6028-428c-9d70-6872ff0f5299\") " pod="kube-system/cilium-kbkdx" Jan 14 13:27:40.596087 systemd[1]: Started sshd@24-10.200.4.19:22-10.200.16.10:58562.service - OpenSSH per-connection server daemon (10.200.16.10:58562). Jan 14 13:27:40.737588 containerd[1829]: time="2025-01-14T13:27:40.737441391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kbkdx,Uid:b83dbfe2-6028-428c-9d70-6872ff0f5299,Namespace:kube-system,Attempt:0,}" Jan 14 13:27:40.781975 containerd[1829]: time="2025-01-14T13:27:40.781862509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 13:27:40.781975 containerd[1829]: time="2025-01-14T13:27:40.781934409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 13:27:40.782404 containerd[1829]: time="2025-01-14T13:27:40.781949609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:27:40.782404 containerd[1829]: time="2025-01-14T13:27:40.782067510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 13:27:40.821351 containerd[1829]: time="2025-01-14T13:27:40.821298202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kbkdx,Uid:b83dbfe2-6028-428c-9d70-6872ff0f5299,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\"" Jan 14 13:27:40.824462 containerd[1829]: time="2025-01-14T13:27:40.824293817Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 13:27:40.863919 containerd[1829]: time="2025-01-14T13:27:40.863868311Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d1b169a1cdb457708e2cc0701c9d92f3ef2d1de48940a12503080e440f90ed6\"" Jan 14 13:27:40.864700 containerd[1829]: time="2025-01-14T13:27:40.864670015Z" level=info msg="StartContainer for \"9d1b169a1cdb457708e2cc0701c9d92f3ef2d1de48940a12503080e440f90ed6\"" Jan 14 13:27:40.927878 containerd[1829]: time="2025-01-14T13:27:40.927698924Z" level=info msg="StartContainer for \"9d1b169a1cdb457708e2cc0701c9d92f3ef2d1de48940a12503080e440f90ed6\" returns successfully" Jan 14 13:27:41.002827 containerd[1829]: time="2025-01-14T13:27:41.002704092Z" level=info msg="shim disconnected" id=9d1b169a1cdb457708e2cc0701c9d92f3ef2d1de48940a12503080e440f90ed6 namespace=k8s.io Jan 14 13:27:41.002827 containerd[1829]: time="2025-01-14T13:27:41.002805393Z" level=warning msg="cleaning up after shim disconnected" id=9d1b169a1cdb457708e2cc0701c9d92f3ef2d1de48940a12503080e440f90ed6 namespace=k8s.io Jan 14 13:27:41.002827 containerd[1829]: time="2025-01-14T13:27:41.002817593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:41.024726 containerd[1829]: time="2025-01-14T13:27:41.024662200Z" 
level=warning msg="cleanup warnings time=\"2025-01-14T13:27:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 13:27:41.199200 sshd[5359]: Accepted publickey for core from 10.200.16.10 port 58562 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:41.200855 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:41.205647 systemd-logind[1807]: New session 27 of user core. Jan 14 13:27:41.211020 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 13:27:41.443255 kubelet[3568]: E0114 13:27:41.443131 3568 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 13:27:41.626891 sshd[5470]: Connection closed by 10.200.16.10 port 58562 Jan 14 13:27:41.627633 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:41.631605 systemd[1]: sshd@24-10.200.4.19:22-10.200.16.10:58562.service: Deactivated successfully. Jan 14 13:27:41.636227 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 13:27:41.637184 systemd-logind[1807]: Session 27 logged out. Waiting for processes to exit. Jan 14 13:27:41.638298 systemd-logind[1807]: Removed session 27. Jan 14 13:27:41.731405 systemd[1]: Started sshd@25-10.200.4.19:22-10.200.16.10:58570.service - OpenSSH per-connection server daemon (10.200.16.10:58570). 
Jan 14 13:27:41.836235 containerd[1829]: time="2025-01-14T13:27:41.836190381Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 13:27:41.869564 containerd[1829]: time="2025-01-14T13:27:41.869518044Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d3fa3905fdaed85459aa68f56610ddde613a817fb8cc114c77665f3a44859d87\"" Jan 14 13:27:41.870741 containerd[1829]: time="2025-01-14T13:27:41.870097747Z" level=info msg="StartContainer for \"d3fa3905fdaed85459aa68f56610ddde613a817fb8cc114c77665f3a44859d87\"" Jan 14 13:27:41.931446 containerd[1829]: time="2025-01-14T13:27:41.931386248Z" level=info msg="StartContainer for \"d3fa3905fdaed85459aa68f56610ddde613a817fb8cc114c77665f3a44859d87\" returns successfully" Jan 14 13:27:41.973471 containerd[1829]: time="2025-01-14T13:27:41.973374254Z" level=info msg="shim disconnected" id=d3fa3905fdaed85459aa68f56610ddde613a817fb8cc114c77665f3a44859d87 namespace=k8s.io Jan 14 13:27:41.973471 containerd[1829]: time="2025-01-14T13:27:41.973472554Z" level=warning msg="cleaning up after shim disconnected" id=d3fa3905fdaed85459aa68f56610ddde613a817fb8cc114c77665f3a44859d87 namespace=k8s.io Jan 14 13:27:41.973881 containerd[1829]: time="2025-01-14T13:27:41.973486854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:42.338051 sshd[5476]: Accepted publickey for core from 10.200.16.10 port 58570 ssh2: RSA SHA256:5BL0MHBLBPplNMyfHIepoEZk0FL953xzvqueGYBPke0 Jan 14 13:27:42.339628 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:27:42.347355 systemd-logind[1807]: New session 28 of user core. Jan 14 13:27:42.354219 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 13:27:42.678029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3fa3905fdaed85459aa68f56610ddde613a817fb8cc114c77665f3a44859d87-rootfs.mount: Deactivated successfully. Jan 14 13:27:42.840518 containerd[1829]: time="2025-01-14T13:27:42.840472607Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 13:27:42.878649 containerd[1829]: time="2025-01-14T13:27:42.878601194Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1b85a2f0340c78827828af4ed38bec652d5d08637b7e3a0ce5fbed93e342a340\"" Jan 14 13:27:42.879841 containerd[1829]: time="2025-01-14T13:27:42.879636199Z" level=info msg="StartContainer for \"1b85a2f0340c78827828af4ed38bec652d5d08637b7e3a0ce5fbed93e342a340\"" Jan 14 13:27:42.950181 containerd[1829]: time="2025-01-14T13:27:42.949616243Z" level=info msg="StartContainer for \"1b85a2f0340c78827828af4ed38bec652d5d08637b7e3a0ce5fbed93e342a340\" returns successfully" Jan 14 13:27:42.980592 containerd[1829]: time="2025-01-14T13:27:42.980518594Z" level=info msg="shim disconnected" id=1b85a2f0340c78827828af4ed38bec652d5d08637b7e3a0ce5fbed93e342a340 namespace=k8s.io Jan 14 13:27:42.980592 containerd[1829]: time="2025-01-14T13:27:42.980586995Z" level=warning msg="cleaning up after shim disconnected" id=1b85a2f0340c78827828af4ed38bec652d5d08637b7e3a0ce5fbed93e342a340 namespace=k8s.io Jan 14 13:27:42.980592 containerd[1829]: time="2025-01-14T13:27:42.980597595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:43.675668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b85a2f0340c78827828af4ed38bec652d5d08637b7e3a0ce5fbed93e342a340-rootfs.mount: Deactivated successfully. 
Jan 14 13:27:43.844305 containerd[1829]: time="2025-01-14T13:27:43.843979630Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 13:27:43.888651 containerd[1829]: time="2025-01-14T13:27:43.888449348Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"07faa27f4149c5aba9a8c7dac911200b552c528a2225efcf9bf91b7ac4ee67f0\"" Jan 14 13:27:43.889445 containerd[1829]: time="2025-01-14T13:27:43.889414253Z" level=info msg="StartContainer for \"07faa27f4149c5aba9a8c7dac911200b552c528a2225efcf9bf91b7ac4ee67f0\"" Jan 14 13:27:43.971306 containerd[1829]: time="2025-01-14T13:27:43.971083153Z" level=info msg="StartContainer for \"07faa27f4149c5aba9a8c7dac911200b552c528a2225efcf9bf91b7ac4ee67f0\" returns successfully" Jan 14 13:27:43.998066 containerd[1829]: time="2025-01-14T13:27:43.997992685Z" level=info msg="shim disconnected" id=07faa27f4149c5aba9a8c7dac911200b552c528a2225efcf9bf91b7ac4ee67f0 namespace=k8s.io Jan 14 13:27:43.998066 containerd[1829]: time="2025-01-14T13:27:43.998058986Z" level=warning msg="cleaning up after shim disconnected" id=07faa27f4149c5aba9a8c7dac911200b552c528a2225efcf9bf91b7ac4ee67f0 namespace=k8s.io Jan 14 13:27:43.998066 containerd[1829]: time="2025-01-14T13:27:43.998072086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 13:27:44.675938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07faa27f4149c5aba9a8c7dac911200b552c528a2225efcf9bf91b7ac4ee67f0-rootfs.mount: Deactivated successfully. 
Jan 14 13:27:44.848712 containerd[1829]: time="2025-01-14T13:27:44.848494783Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 13:27:44.891210 containerd[1829]: time="2025-01-14T13:27:44.891163894Z" level=info msg="CreateContainer within sandbox \"77f5d2000097ea46abe1290fab44d8e5d21156a0fdce46e93fe2c4c0ac313e71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1111f133b75a3d68ef1f7b55e3f14b175b6ec90fb817e057dce03927d85872f\"" Jan 14 13:27:44.892917 containerd[1829]: time="2025-01-14T13:27:44.892014798Z" level=info msg="StartContainer for \"d1111f133b75a3d68ef1f7b55e3f14b175b6ec90fb817e057dce03927d85872f\"" Jan 14 13:27:44.972927 containerd[1829]: time="2025-01-14T13:27:44.971209990Z" level=info msg="StartContainer for \"d1111f133b75a3d68ef1f7b55e3f14b175b6ec90fb817e057dce03927d85872f\" returns successfully" Jan 14 13:27:45.476791 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 14 13:27:45.676422 systemd[1]: run-containerd-runc-k8s.io-d1111f133b75a3d68ef1f7b55e3f14b175b6ec90fb817e057dce03927d85872f-runc.sbS9X3.mount: Deactivated successfully. 
Jan 14 13:27:48.340293 systemd-networkd[1366]: lxc_health: Link UP Jan 14 13:27:48.344937 systemd-networkd[1366]: lxc_health: Gained carrier Jan 14 13:27:48.767394 kubelet[3568]: I0114 13:27:48.767341 3568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kbkdx" podStartSLOduration=8.767291658 podStartE2EDuration="8.767291658s" podCreationTimestamp="2025-01-14 13:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 13:27:45.871546641 +0000 UTC m=+219.687493886" watchObservedRunningTime="2025-01-14 13:27:48.767291658 +0000 UTC m=+222.583239203" Jan 14 13:27:49.447708 update_engine[1812]: I20250114 13:27:49.445796 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:27:49.447708 update_engine[1812]: I20250114 13:27:49.446130 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:27:49.447708 update_engine[1812]: I20250114 13:27:49.446449 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 13:27:49.475791 update_engine[1812]: E20250114 13:27:49.475492 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:27:49.475791 update_engine[1812]: I20250114 13:27:49.475603 1812 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 13:27:49.475791 update_engine[1812]: I20250114 13:27:49.475616 1812 omaha_request_action.cc:617] Omaha request response: Jan 14 13:27:49.475791 update_engine[1812]: E20250114 13:27:49.475724 1812 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477785 1812 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477831 1812 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477842 1812 update_attempter.cc:306] Processing Done. Jan 14 13:27:49.478691 update_engine[1812]: E20250114 13:27:49.477862 1812 update_attempter.cc:619] Update failed. Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477870 1812 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477877 1812 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477885 1812 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.477985 1812 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.478019 1812 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.478029 1812 omaha_request_action.cc:272] Request: Jan 14 13:27:49.478691 update_engine[1812]: Jan 14 13:27:49.478691 update_engine[1812]: Jan 14 13:27:49.478691 update_engine[1812]: Jan 14 13:27:49.478691 update_engine[1812]: Jan 14 13:27:49.478691 update_engine[1812]: Jan 14 13:27:49.478691 update_engine[1812]: Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.478039 1812 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.478303 1812 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 13:27:49.478691 update_engine[1812]: I20250114 13:27:49.478613 1812 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 13:27:49.480619 locksmithd[1851]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 13:27:49.486562 update_engine[1812]: E20250114 13:27:49.486311 1812 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 14 13:27:49.487780 update_engine[1812]: I20250114 13:27:49.486842 1812 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 13:27:49.487908 update_engine[1812]: I20250114 13:27:49.487880 1812 omaha_request_action.cc:617] Omaha request response: Jan 14 13:27:49.487996 update_engine[1812]: I20250114 13:27:49.487982 1812 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 13:27:49.488132 update_engine[1812]: I20250114 13:27:49.488116 1812 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 13:27:49.488194 update_engine[1812]: I20250114 13:27:49.488180 1812 update_attempter.cc:306] Processing Done. Jan 14 13:27:49.488498 update_engine[1812]: I20250114 13:27:49.488247 1812 update_attempter.cc:310] Error event sent. Jan 14 13:27:49.488498 update_engine[1812]: I20250114 13:27:49.488268 1812 update_check_scheduler.cc:74] Next update check in 46m18s Jan 14 13:27:49.489749 locksmithd[1851]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 13:27:49.796963 systemd-networkd[1366]: lxc_health: Gained IPv6LL Jan 14 13:27:55.701607 sshd[5539]: Connection closed by 10.200.16.10 port 58570 Jan 14 13:27:55.702330 sshd-session[5476]: pam_unix(sshd:session): session closed for user core Jan 14 13:27:55.706168 systemd[1]: sshd@25-10.200.4.19:22-10.200.16.10:58570.service: Deactivated successfully. Jan 14 13:27:55.711125 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 13:27:55.712031 systemd-logind[1807]: Session 28 logged out. 
Waiting for processes to exit. Jan 14 13:27:55.713632 systemd-logind[1807]: Removed session 28.