Jan 30 13:04:14.104100 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:04:14.104135 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:04:14.104149 kernel: BIOS-provided physical RAM map: Jan 30 13:04:14.104159 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:04:14.104169 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 30 13:04:14.104179 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 30 13:04:14.104191 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 30 13:04:14.104202 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 30 13:04:14.104215 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 30 13:04:14.104226 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 30 13:04:14.104237 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 30 13:04:14.104247 kernel: printk: bootconsole [earlyser0] enabled Jan 30 13:04:14.104257 kernel: NX (Execute Disable) protection: active Jan 30 13:04:14.104268 kernel: APIC: Static calls initialized Jan 30 13:04:14.104284 kernel: efi: EFI v2.7 by Microsoft Jan 30 13:04:14.104296 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f621218 RNG=0x3ffd1018 Jan 30 13:04:14.104308 kernel: random: crng init done Jan 30 13:04:14.104319 kernel: 
secureboot: Secure boot disabled Jan 30 13:04:14.104331 kernel: SMBIOS 3.1.0 present. Jan 30 13:04:14.104342 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 30 13:04:14.104353 kernel: Hypervisor detected: Microsoft Hyper-V Jan 30 13:04:14.104364 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 30 13:04:14.104389 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Jan 30 13:04:14.104400 kernel: Hyper-V: Nested features: 0x1e0101 Jan 30 13:04:14.104415 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 30 13:04:14.104429 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 30 13:04:14.104452 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:04:14.104464 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:04:14.104475 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 30 13:04:14.104486 kernel: tsc: Detected 2593.906 MHz processor Jan 30 13:04:14.104499 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:04:14.104512 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:04:14.104523 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 30 13:04:14.104543 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:04:14.104555 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:04:14.104566 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 30 13:04:14.104578 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 30 13:04:14.104591 kernel: Using GB pages for direct mapping Jan 30 13:04:14.104603 kernel: ACPI: Early table checksum verification disabled Jan 30 13:04:14.104615 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 30 
13:04:14.104632 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104648 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104662 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 30 13:04:14.104675 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 30 13:04:14.104688 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104702 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104715 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104731 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104744 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104756 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104769 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104781 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 30 13:04:14.104793 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 30 13:04:14.104808 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 30 13:04:14.104820 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 30 13:04:14.104832 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 30 13:04:14.104849 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:04:14.104862 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:04:14.104874 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 30 13:04:14.104887 kernel: ACPI: 
Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:04:14.104900 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:04:14.104913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:04:14.104927 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:04:14.104940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:04:14.104955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:04:14.104970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:04:14.104983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:04:14.104996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:04:14.105009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:04:14.105022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:04:14.105036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:04:14.105049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:04:14.105061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:04:14.105076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:04:14.105088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:04:14.105101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:04:14.105114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:04:14.105127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:04:14.105141 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:04:14.105154 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] 
Jan 30 13:04:14.105166 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff] Jan 30 13:04:14.105179 kernel: Zone ranges: Jan 30 13:04:14.105195 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:04:14.105209 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:04:14.105222 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:04:14.105235 kernel: Movable zone start for each node Jan 30 13:04:14.105249 kernel: Early memory node ranges Jan 30 13:04:14.105262 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:04:14.105275 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:04:14.105289 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:04:14.105302 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:04:14.105318 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:04:14.105331 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:04:14.105344 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:04:14.105358 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 30 13:04:14.105371 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 30 13:04:14.105404 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:04:14.105418 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:04:14.105431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:04:14.105444 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:04:14.105461 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:04:14.105474 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:04:14.105487 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:04:14.105499 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:04:14.105513 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, 
max_idle_ns: 1910969940391419 ns Jan 30 13:04:14.105525 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:04:14.105538 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:04:14.105551 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:04:14.105563 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:04:14.105579 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:04:14.105592 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:04:14.105606 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:04:14.105620 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:04:14.105632 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:04:14.105645 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:04:14.105658 kernel: Fallback order for Node 0: 0 Jan 30 13:04:14.105670 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:04:14.105686 kernel: Policy zone: Normal Jan 30 13:04:14.105708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:04:14.105721 kernel: software IO TLB: area num 2. 
Jan 30 13:04:14.105738 kernel: Memory: 8067556K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 319648K reserved, 0K cma-reserved) Jan 30 13:04:14.105751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:04:14.105765 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:04:14.105778 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:04:14.105791 kernel: Dynamic Preempt: voluntary Jan 30 13:04:14.105804 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:04:14.105822 kernel: rcu: RCU event tracing is enabled. Jan 30 13:04:14.105836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:04:14.105852 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:04:14.105866 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:04:14.105880 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:04:14.105893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:04:14.105907 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:04:14.105920 kernel: Using NULL legacy PIC Jan 30 13:04:14.105936 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:04:14.105950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:04:14.105963 kernel: Console: colour dummy device 80x25 Jan 30 13:04:14.105977 kernel: printk: console [tty1] enabled Jan 30 13:04:14.105990 kernel: printk: console [ttyS0] enabled Jan 30 13:04:14.106003 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:04:14.106017 kernel: ACPI: Core revision 20230628 Jan 30 13:04:14.106030 kernel: Failed to register legacy timer interrupt Jan 30 13:04:14.106044 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:04:14.106060 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:04:14.106073 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:04:14.106086 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:04:14.106100 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:04:14.106113 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:04:14.106127 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:04:14.106141 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:04:14.106154 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:04:14.106168 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 30 13:04:14.106184 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:04:14.106198 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:04:14.106211 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:04:14.106232 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:04:14.106247 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:04:14.106258 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:04:14.106272 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:04:14.106286 kernel: RETBleed: Vulnerable Jan 30 13:04:14.106302 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:04:14.106316 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:04:14.106334 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:04:14.106348 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:04:14.106361 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:04:14.106395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:04:14.106409 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:04:14.106423 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:04:14.106437 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:04:14.106451 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:04:14.106465 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:04:14.106479 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:04:14.106493 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:04:14.106510 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:04:14.106524 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:04:14.106538 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:04:14.106552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:04:14.106566 kernel: landlock: Up and running. Jan 30 13:04:14.106579 kernel: SELinux: Initializing. 
Jan 30 13:04:14.106593 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.106607 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.106621 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:04:14.106636 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:04:14.106650 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:04:14.106668 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:04:14.106682 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:04:14.106696 kernel: signal: max sigframe size: 3632 Jan 30 13:04:14.106710 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:04:14.106725 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:04:14.106739 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:04:14.106753 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:04:14.106767 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:04:14.106781 kernel: .... node #0, CPUs: #1 Jan 30 13:04:14.106799 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:04:14.106814 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:04:14.106828 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:04:14.106842 kernel: smpboot: Max logical packages: 1 Jan 30 13:04:14.106856 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:04:14.106871 kernel: devtmpfs: initialized Jan 30 13:04:14.106885 kernel: x86/mm: Memory block size: 128MB Jan 30 13:04:14.106899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:04:14.106916 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:04:14.106930 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:04:14.106945 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:04:14.106959 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:04:14.106973 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:04:14.106988 kernel: audit: type=2000 audit(1738242252.027:1): state=initialized audit_enabled=0 res=1 Jan 30 13:04:14.107002 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:04:14.107016 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:04:14.107030 kernel: cpuidle: using governor menu Jan 30 13:04:14.107047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:04:14.107061 kernel: dca service started, version 1.12.1 Jan 30 13:04:14.107076 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:04:14.107090 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:04:14.107105 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:04:14.107119 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:04:14.107133 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:04:14.107147 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:04:14.107161 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:04:14.107178 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:04:14.107192 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:04:14.107206 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:04:14.107221 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:04:14.107235 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:04:14.107249 kernel: ACPI: Interpreter enabled Jan 30 13:04:14.107263 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:04:14.107277 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:04:14.107292 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:04:14.107309 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:04:14.107323 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:04:14.107337 kernel: iommu: Default domain type: Translated Jan 30 13:04:14.107351 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:04:14.107365 kernel: efivars: Registered efivars operations Jan 30 13:04:14.114981 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:04:14.115003 kernel: PCI: System does not support PCI Jan 30 13:04:14.115019 kernel: vgaarb: loaded Jan 30 13:04:14.115034 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:04:14.115053 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:04:14.115068 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:04:14.115083 kernel: pnp: PnP ACPI init Jan 30 13:04:14.115097 
kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:04:14.115112 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:04:14.115126 kernel: NET: Registered PF_INET protocol family Jan 30 13:04:14.115141 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:04:14.115156 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:04:14.115171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:04:14.115188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:04:14.115203 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:04:14.115217 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:04:14.115232 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.115247 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.115261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:04:14.115275 kernel: NET: Registered PF_XDP protocol family Jan 30 13:04:14.115290 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:04:14.115304 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:04:14.115322 kernel: software IO TLB: mapped [mem 0x000000003ae72000-0x000000003ee72000] (64MB) Jan 30 13:04:14.115336 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:04:14.115351 kernel: Initialise system trusted keyrings Jan 30 13:04:14.115364 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:04:14.115388 kernel: Key type asymmetric registered Jan 30 13:04:14.115402 kernel: Asymmetric key parser 'x509' registered Jan 30 13:04:14.115416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:04:14.115431 kernel: io scheduler mq-deadline 
registered Jan 30 13:04:14.115445 kernel: io scheduler kyber registered Jan 30 13:04:14.115463 kernel: io scheduler bfq registered Jan 30 13:04:14.115478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:04:14.115492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:04:14.115506 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:04:14.115521 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:04:14.115535 kernel: i8042: PNP: No PS/2 controller found. Jan 30 13:04:14.115720 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:04:14.115850 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:04:13 UTC (1738242253) Jan 30 13:04:14.115969 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:04:14.115987 kernel: intel_pstate: CPU model not supported Jan 30 13:04:14.116002 kernel: efifb: probing for efifb Jan 30 13:04:14.116016 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:04:14.116031 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:04:14.116045 kernel: efifb: scrolling: redraw Jan 30 13:04:14.116060 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:04:14.116074 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:04:14.116089 kernel: fb0: EFI VGA frame buffer device Jan 30 13:04:14.116107 kernel: pstore: Using crash dump compression: deflate Jan 30 13:04:14.116121 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:04:14.116135 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:04:14.116149 kernel: Segment Routing with IPv6 Jan 30 13:04:14.116164 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:04:14.116179 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:04:14.116193 kernel: Key type dns_resolver registered Jan 30 13:04:14.116207 kernel: IPI shorthand broadcast: enabled Jan 30 13:04:14.116221 kernel: 
sched_clock: Marking stable (844011600, 49023200)->(1107738100, -214703300) Jan 30 13:04:14.116238 kernel: registered taskstats version 1 Jan 30 13:04:14.116253 kernel: Loading compiled-in X.509 certificates Jan 30 13:04:14.116267 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:04:14.116282 kernel: Key type .fscrypt registered Jan 30 13:04:14.116295 kernel: Key type fscrypt-provisioning registered Jan 30 13:04:14.116310 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:04:14.116324 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:04:14.116338 kernel: ima: No architecture policies found Jan 30 13:04:14.116355 kernel: clk: Disabling unused clocks Jan 30 13:04:14.116370 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:04:14.116394 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:04:14.116407 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:04:14.116419 kernel: Run /init as init process Jan 30 13:04:14.116432 kernel: with arguments: Jan 30 13:04:14.116456 kernel: /init Jan 30 13:04:14.116469 kernel: with environment: Jan 30 13:04:14.116482 kernel: HOME=/ Jan 30 13:04:14.116495 kernel: TERM=linux Jan 30 13:04:14.116513 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:04:14.116531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:04:14.116549 systemd[1]: Detected virtualization microsoft. Jan 30 13:04:14.116564 systemd[1]: Detected architecture x86-64. Jan 30 13:04:14.116577 systemd[1]: Running in initrd. Jan 30 13:04:14.116591 systemd[1]: No hostname configured, using default hostname. 
Jan 30 13:04:14.116605 systemd[1]: Hostname set to . Jan 30 13:04:14.116626 systemd[1]: Initializing machine ID from random generator. Jan 30 13:04:14.116639 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:04:14.116652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:04:14.116667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:04:14.116683 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:04:14.116698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:04:14.116713 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:04:14.116728 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:04:14.116748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:04:14.116763 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:04:14.116778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:04:14.116792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:04:14.116807 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:04:14.116831 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:04:14.116846 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:04:14.116864 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:04:14.116879 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:04:14.116894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 13:04:14.116909 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:04:14.116924 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:04:14.116939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:04:14.116955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:04:14.116970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:04:14.116985 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:04:14.117004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:04:14.117019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:04:14.117034 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:04:14.117050 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:04:14.117065 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:04:14.117080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:04:14.117123 systemd-journald[177]: Collecting audit messages is disabled. Jan 30 13:04:14.117161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:14.117176 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:04:14.117192 systemd-journald[177]: Journal started Jan 30 13:04:14.117227 systemd-journald[177]: Runtime Journal (/run/log/journal/0638f834b7ae4f579e98cf5e93d42381) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:04:14.103064 systemd-modules-load[178]: Inserted module 'overlay' Jan 30 13:04:14.129303 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:04:14.130134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:04:14.137102 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 30 13:04:14.141366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:14.154413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:04:14.156610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:04:14.165373 kernel: Bridge firewalling registered Jan 30 13:04:14.161539 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 30 13:04:14.173933 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:04:14.183592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:04:14.187436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:04:14.188478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:04:14.192521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:04:14.193998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:04:14.213670 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:04:14.223491 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:04:14.228139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:04:14.237326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:04:14.247529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:04:14.260689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 13:04:14.266405 dracut-cmdline[208]: dracut-dracut-053 Jan 30 13:04:14.266405 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:04:14.327713 systemd-resolved[217]: Positive Trust Anchors: Jan 30 13:04:14.330001 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:04:14.330062 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:04:14.361282 kernel: SCSI subsystem initialized Jan 30 13:04:14.350824 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 30 13:04:14.356114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:04:14.364225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:04:14.376550 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 13:04:14.388394 kernel: iscsi: registered transport (tcp) Jan 30 13:04:14.409123 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:04:14.409202 kernel: QLogic iSCSI HBA Driver Jan 30 13:04:14.445154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:04:14.456586 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:04:14.485554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:04:14.485660 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:04:14.489261 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:04:14.531409 kernel: raid6: avx512x4 gen() 18190 MB/s Jan 30 13:04:14.550399 kernel: raid6: avx512x2 gen() 18142 MB/s Jan 30 13:04:14.568392 kernel: raid6: avx512x1 gen() 18437 MB/s Jan 30 13:04:14.587394 kernel: raid6: avx2x4 gen() 18347 MB/s Jan 30 13:04:14.606391 kernel: raid6: avx2x2 gen() 18358 MB/s Jan 30 13:04:14.626325 kernel: raid6: avx2x1 gen() 14065 MB/s Jan 30 13:04:14.626366 kernel: raid6: using algorithm avx512x1 gen() 18437 MB/s Jan 30 13:04:14.646973 kernel: raid6: .... xor() 26526 MB/s, rmw enabled Jan 30 13:04:14.647025 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:04:14.674410 kernel: xor: automatically using best checksumming function avx Jan 30 13:04:14.816403 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:04:14.825433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:04:14.836620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:04:14.854078 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 30 13:04:14.858504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:04:14.873551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 30 13:04:14.885675 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 30 13:04:14.912499 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:04:14.922701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:04:14.963563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:04:14.980541 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:04:15.000585 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:04:15.010921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:04:15.018698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:04:15.028090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:04:15.039400 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:04:15.043598 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:04:15.067471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:04:15.067629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:04:15.071147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:04:15.078441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:04:15.078585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:15.090073 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:15.105483 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:04:15.105528 kernel: AES CTR mode by8 optimization enabled Jan 30 13:04:15.106762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:15.114836 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:04:15.119961 kernel: hv_vmbus: Vmbus version:5.2 Jan 30 13:04:15.141561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:04:15.141690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:15.181476 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:04:15.181517 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:04:15.181532 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:04:15.181546 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:04:15.181558 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 13:04:15.181572 kernel: scsi host1: storvsc_host_t Jan 30 13:04:15.181748 kernel: scsi host0: storvsc_host_t Jan 30 13:04:15.181869 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:04:15.181896 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:04:15.175744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:15.188585 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:04:15.193730 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:04:15.216671 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:04:15.218524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:15.233483 kernel: PTP clock support registered Jan 30 13:04:15.233675 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:04:15.253740 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 13:04:15.253783 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:04:15.262005 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:04:15.269691 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:04:15.269715 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:04:15.269741 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:04:15.269762 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:04:15.682572 systemd-resolved[217]: Clock change detected. Flushing caches. Jan 30 13:04:15.702678 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:04:15.710939 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:04:15.710958 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:04:15.737291 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:04:15.737482 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:04:15.737667 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:04:15.737819 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:04:15.737972 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:04:15.738117 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: VF slot 1 added Jan 30 13:04:15.738309 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:15.738327 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:04:15.709016 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:04:15.746143 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:04:15.751183 kernel: hv_pci 89cd63b5-d952-43f3-a13c-dda244e90d3c: PCI VMBus probing: Using version 0x10004 Jan 30 13:04:15.796436 kernel: hv_pci 89cd63b5-d952-43f3-a13c-dda244e90d3c: PCI host bridge to bus d952:00 Jan 30 13:04:15.796628 kernel: pci_bus d952:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 30 13:04:15.796801 kernel: pci_bus d952:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 13:04:15.797578 kernel: pci d952:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 30 13:04:15.797779 kernel: pci d952:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:04:15.797954 kernel: pci d952:00:02.0: enabling Extended Tags Jan 30 13:04:15.798121 kernel: pci d952:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d952:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 30 13:04:15.798310 kernel: pci_bus d952:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:04:15.798456 kernel: pci d952:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:04:15.994419 kernel: mlx5_core d952:00:02.0: enabling device (0000 -> 0002) Jan 30 13:04:16.227884 kernel: mlx5_core d952:00:02.0: firmware version: 14.30.5000 Jan 30 13:04:16.228084 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: VF registering: eth1 Jan 30 13:04:16.228618 kernel: mlx5_core d952:00:02.0 eth1: joined to eth0 Jan 30 13:04:16.228803 kernel: mlx5_core d952:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:04:16.234378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Jan 30 13:04:16.246263 kernel: mlx5_core d952:00:02.0 enP55634s1: renamed from eth1 Jan 30 13:04:16.322155 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459) Jan 30 13:04:16.337834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:04:16.363151 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (457) Jan 30 13:04:16.376695 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:04:16.380254 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:04:16.391093 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:04:16.400940 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 30 13:04:16.418150 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:16.425200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:17.433158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:17.433224 disk-uuid[603]: The operation has completed successfully. Jan 30 13:04:17.510741 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:04:17.510857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:04:17.533252 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:04:17.539114 sh[689]: Success Jan 30 13:04:17.579156 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:04:17.801256 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:04:17.817225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:04:17.822119 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:04:17.841172 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:04:17.841260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:17.844524 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:04:17.847586 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:04:17.850565 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:04:18.222272 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:04:18.227622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:04:18.237335 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:04:18.243437 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:04:18.253239 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:18.253289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:18.257694 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:04:18.276154 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:04:18.291690 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:18.291248 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:04:18.303064 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:04:18.313309 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:04:18.337868 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:04:18.351304 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 13:04:18.370339 systemd-networkd[873]: lo: Link UP Jan 30 13:04:18.370348 systemd-networkd[873]: lo: Gained carrier Jan 30 13:04:18.372485 systemd-networkd[873]: Enumeration completed Jan 30 13:04:18.372721 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:04:18.373939 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:04:18.373943 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:04:18.377145 systemd[1]: Reached target network.target - Network. Jan 30 13:04:18.436147 kernel: mlx5_core d952:00:02.0 enP55634s1: Link up Jan 30 13:04:18.478157 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: Data path switched to VF: enP55634s1 Jan 30 13:04:18.479266 systemd-networkd[873]: enP55634s1: Link UP Jan 30 13:04:18.479422 systemd-networkd[873]: eth0: Link UP Jan 30 13:04:18.479626 systemd-networkd[873]: eth0: Gained carrier Jan 30 13:04:18.479643 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:04:18.491350 systemd-networkd[873]: enP55634s1: Gained carrier Jan 30 13:04:18.522198 systemd-networkd[873]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:04:19.258519 ignition[834]: Ignition 2.20.0 Jan 30 13:04:19.258532 ignition[834]: Stage: fetch-offline Jan 30 13:04:19.260848 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:04:19.258573 ignition[834]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.258583 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.258691 ignition[834]: parsed url from cmdline: "" Jan 30 13:04:19.258695 ignition[834]: no config URL provided Jan 30 13:04:19.258702 ignition[834]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:04:19.258712 ignition[834]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:04:19.258719 ignition[834]: failed to fetch config: resource requires networking Jan 30 13:04:19.258934 ignition[834]: Ignition finished successfully Jan 30 13:04:19.296300 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:04:19.309733 ignition[882]: Ignition 2.20.0 Jan 30 13:04:19.309744 ignition[882]: Stage: fetch Jan 30 13:04:19.309949 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.309963 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.310066 ignition[882]: parsed url from cmdline: "" Jan 30 13:04:19.310069 ignition[882]: no config URL provided Jan 30 13:04:19.310073 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:04:19.310081 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:04:19.310103 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:04:19.383322 ignition[882]: GET result: OK Jan 30 13:04:19.383423 ignition[882]: config has been read from IMDS userdata Jan 30 13:04:19.383457 ignition[882]: parsing config with SHA512: 5df08a0d68041d0f562bc83f7062a9d3bec5c9da53434633d1aa85a3d46bcf7a2953265d4e1f0f9212bf060634d3e498e0568170295c99d417effcdca7a3fe97 Jan 30 13:04:19.389105 unknown[882]: fetched base config from "system" Jan 30 13:04:19.389120 unknown[882]: fetched base config from "system" Jan 30 13:04:19.389519 ignition[882]: fetch: fetch complete Jan 30 
13:04:19.389140 unknown[882]: fetched user config from "azure" Jan 30 13:04:19.389524 ignition[882]: fetch: fetch passed Jan 30 13:04:19.391201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:04:19.389566 ignition[882]: Ignition finished successfully Jan 30 13:04:19.403462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:04:19.418090 ignition[888]: Ignition 2.20.0 Jan 30 13:04:19.418101 ignition[888]: Stage: kargs Jan 30 13:04:19.420485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:04:19.418332 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.418346 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.419254 ignition[888]: kargs: kargs passed Jan 30 13:04:19.419300 ignition[888]: Ignition finished successfully Jan 30 13:04:19.435420 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:04:19.449143 ignition[894]: Ignition 2.20.0 Jan 30 13:04:19.449172 ignition[894]: Stage: disks Jan 30 13:04:19.449400 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.451339 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:04:19.449409 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.455522 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:04:19.450444 ignition[894]: disks: disks passed Jan 30 13:04:19.460330 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:04:19.450487 ignition[894]: Ignition finished successfully Jan 30 13:04:19.463556 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:04:19.481585 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:04:19.484247 systemd[1]: Reached target basic.target - Basic System. 
Jan 30 13:04:19.496436 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:04:19.561369 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:04:19.566480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:04:19.576451 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:04:19.591940 systemd-networkd[873]: enP55634s1: Gained IPv6LL Jan 30 13:04:19.668172 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:04:19.668799 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:04:19.671756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:04:19.714288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:04:19.719972 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:04:19.728149 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914) Jan 30 13:04:19.731340 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:04:19.747663 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:19.747694 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:19.747709 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:04:19.734735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:04:19.759764 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:04:19.734772 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:04:19.745521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 13:04:19.761020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:04:19.779346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:04:19.974268 systemd-networkd[873]: eth0: Gained IPv6LL Jan 30 13:04:20.425833 coreos-metadata[916]: Jan 30 13:04:20.425 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:04:20.430234 coreos-metadata[916]: Jan 30 13:04:20.428 INFO Fetch successful Jan 30 13:04:20.430234 coreos-metadata[916]: Jan 30 13:04:20.428 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:04:20.440463 coreos-metadata[916]: Jan 30 13:04:20.440 INFO Fetch successful Jan 30 13:04:20.455392 coreos-metadata[916]: Jan 30 13:04:20.455 INFO wrote hostname ci-4186.1.0-a-d95fc4b65f to /sysroot/etc/hostname Jan 30 13:04:20.460253 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:04:20.467060 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:04:20.483049 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:04:20.488546 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:04:20.495162 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:04:21.279210 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:04:21.289244 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:04:21.297312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:04:21.307043 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:21.306613 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:04:21.333348 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 13:04:21.338376 ignition[1033]: INFO : Ignition 2.20.0 Jan 30 13:04:21.338376 ignition[1033]: INFO : Stage: mount Jan 30 13:04:21.342355 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:21.342355 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:21.342355 ignition[1033]: INFO : mount: mount passed Jan 30 13:04:21.342355 ignition[1033]: INFO : Ignition finished successfully Jan 30 13:04:21.340693 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:04:21.353199 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:04:21.367323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:04:21.378144 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1046) Jan 30 13:04:21.378184 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:21.383150 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:21.387092 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:04:21.392149 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:04:21.393492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:04:21.418723 ignition[1062]: INFO : Ignition 2.20.0 Jan 30 13:04:21.418723 ignition[1062]: INFO : Stage: files Jan 30 13:04:21.423743 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:21.423743 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:21.423743 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:04:21.423743 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:04:21.423743 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:04:21.531815 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:04:21.535682 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:04:21.539036 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:04:21.536513 unknown[1062]: wrote ssh authorized keys file for user: core Jan 30 13:04:21.593760 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:04:21.598915 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:04:21.642810 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:04:21.827259 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:04:21.827259 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:04:21.839207 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:04:22.194950 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:04:22.240881 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:04:22.245515 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:04:22.249955 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:04:22.249955 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:04:22.258898 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:04:22.258898 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:04:22.268475 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:04:22.273019 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:04:22.273019 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:04:22.286858 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:04:22.291913 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:04:22.759900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:04:22.976775 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:04:22.976775 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:04:22.991587 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:04:22.998209 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:04:22.998209 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:04:22.998209 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:04:23.009612 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:04:23.013313 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jan 30 13:04:23.018141 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:04:23.022957 ignition[1062]: INFO : files: files passed Jan 30 13:04:23.022957 ignition[1062]: INFO : Ignition finished successfully Jan 30 13:04:23.025858 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:04:23.035667 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:04:23.041751 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:04:23.063861 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:04:23.066340 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:04:23.076958 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:04:23.076958 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:04:23.091060 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:04:23.080508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:04:23.086991 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:04:23.106329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:04:23.133759 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:04:23.133879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:04:23.139273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:04:23.145244 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 30 13:04:23.150038 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:04:23.162287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:04:23.175462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:04:23.185381 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:04:23.195386 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:04:23.196696 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:04:23.196973 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:04:23.197432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:04:23.197536 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:04:23.209634 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:04:23.214322 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:04:23.225058 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:04:23.236138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:04:23.239251 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:04:23.246394 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:04:23.252151 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:04:23.255561 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:04:23.261091 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:04:23.265919 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:04:23.271098 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 30 13:04:23.271276 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:04:23.276687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:04:23.281338 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:04:23.287365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:04:23.289780 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:04:23.296286 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:04:23.296444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:04:23.311818 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:04:23.311970 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:04:23.318624 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:04:23.318817 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:04:23.329292 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:04:23.329471 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:04:23.347395 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:04:23.349579 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:04:23.351792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:04:23.358076 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:04:23.366643 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:04:23.368489 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 30 13:04:23.376074 ignition[1115]: INFO : Ignition 2.20.0 Jan 30 13:04:23.376074 ignition[1115]: INFO : Stage: umount Jan 30 13:04:23.376074 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:23.376074 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:23.376074 ignition[1115]: INFO : umount: umount passed Jan 30 13:04:23.376074 ignition[1115]: INFO : Ignition finished successfully Jan 30 13:04:23.378785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:04:23.378923 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:04:23.392650 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:04:23.392742 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:04:23.406688 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:04:23.407842 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:04:23.421665 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:04:23.422895 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:04:23.429782 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:04:23.429852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:04:23.437119 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:04:23.437226 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:04:23.442009 systemd[1]: Stopped target network.target - Network. Jan 30 13:04:23.448573 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:04:23.448648 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:04:23.451641 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:04:23.456306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 13:04:23.461478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:04:23.468389 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:04:23.476865 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:04:23.479309 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:04:23.479359 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:04:23.484193 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:04:23.484243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:04:23.488006 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:04:23.489025 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:04:23.494957 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:04:23.497088 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:04:23.509734 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:04:23.515376 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:04:23.522526 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:04:23.522991 systemd-networkd[873]: eth0: DHCPv6 lease lost Jan 30 13:04:23.524493 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:04:23.524591 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:04:23.533982 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:04:23.534097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:04:23.540491 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:04:23.540700 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:04:23.544677 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 30 13:04:23.544734 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:04:23.548950 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:04:23.549011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:04:23.566768 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:04:23.571869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:04:23.571953 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:04:23.572421 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:04:23.572466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:04:23.581796 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:04:23.581848 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:04:23.590223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:04:23.590287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:04:23.595977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:04:23.624952 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:04:23.625143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:04:23.630497 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:04:23.630542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:04:23.634797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:04:23.634835 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:04:23.658156 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: Data path switched from VF: enP55634s1 Jan 30 13:04:23.635195 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:04:23.635237 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:04:23.636092 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:04:23.636124 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:04:23.649328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:04:23.651784 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:04:23.675616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:04:23.680091 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:04:23.683051 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:04:23.689293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:04:23.689358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:23.701356 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:04:23.701454 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:04:23.706042 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:04:23.706188 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:04:23.725466 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:04:23.734290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:04:23.743902 systemd[1]: Switching root. 
Jan 30 13:04:23.796292 systemd-journald[177]: Journal stopped Jan 30 13:04:14.104100 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025 Jan 30 13:04:14.104135 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:04:14.104149 kernel: BIOS-provided physical RAM map: Jan 30 13:04:14.104159 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 30 13:04:14.104169 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 30 13:04:14.104179 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 30 13:04:14.104191 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Jan 30 13:04:14.104202 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 30 13:04:14.104215 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 30 13:04:14.104226 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 30 13:04:14.104237 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 30 13:04:14.104247 kernel: printk: bootconsole [earlyser0] enabled Jan 30 13:04:14.104257 kernel: NX (Execute Disable) protection: active Jan 30 13:04:14.104268 kernel: APIC: Static calls initialized Jan 30 13:04:14.104284 kernel: efi: EFI v2.7 by Microsoft Jan 30 13:04:14.104296 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f621218 RNG=0x3ffd1018 Jan 30 13:04:14.104308 
kernel: random: crng init done Jan 30 13:04:14.104319 kernel: secureboot: Secure boot disabled Jan 30 13:04:14.104331 kernel: SMBIOS 3.1.0 present. Jan 30 13:04:14.104342 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 30 13:04:14.104353 kernel: Hypervisor detected: Microsoft Hyper-V Jan 30 13:04:14.104364 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 30 13:04:14.104389 kernel: Hyper-V: Host Build 10.0.20348.1799-1-0 Jan 30 13:04:14.104400 kernel: Hyper-V: Nested features: 0x1e0101 Jan 30 13:04:14.104415 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 30 13:04:14.104429 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 30 13:04:14.104452 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:04:14.104464 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 30 13:04:14.104475 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 30 13:04:14.104486 kernel: tsc: Detected 2593.906 MHz processor Jan 30 13:04:14.104499 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:04:14.104512 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:04:14.104523 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 30 13:04:14.104543 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 30 13:04:14.104555 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:04:14.104566 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 30 13:04:14.104578 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 30 13:04:14.104591 kernel: Using GB pages for direct mapping Jan 30 13:04:14.104603 kernel: ACPI: Early table checksum verification disabled Jan 30 13:04:14.104615 kernel: ACPI: 
RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 30 13:04:14.104632 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104648 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104662 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 30 13:04:14.104675 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 30 13:04:14.104688 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104702 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104715 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104731 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104744 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104756 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104769 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 30 13:04:14.104781 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 30 13:04:14.104793 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 30 13:04:14.104808 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 30 13:04:14.104820 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 30 13:04:14.104832 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 30 13:04:14.104849 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:04:14.104862 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:04:14.104874 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Jan 30 13:04:14.104887 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:04:14.104900 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:04:14.104913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:04:14.104927 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:04:14.104940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:04:14.104955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:04:14.104970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:04:14.104983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:04:14.104996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:04:14.105009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:04:14.105022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:04:14.105036 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:04:14.105049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:04:14.105061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:04:14.105076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:04:14.105088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:04:14.105101 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:04:14.105114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:04:14.105127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:04:14.105141 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:04:14.105154 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 30 13:04:14.105166 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff] Jan 30 13:04:14.105179 kernel: Zone ranges: Jan 30 13:04:14.105195 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:04:14.105209 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:04:14.105222 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:04:14.105235 kernel: Movable zone start for each node Jan 30 13:04:14.105249 kernel: Early memory node ranges Jan 30 13:04:14.105262 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:04:14.105275 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:04:14.105289 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:04:14.105302 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:04:14.105318 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:04:14.105331 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:04:14.105344 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:04:14.105358 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 30 13:04:14.105371 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 30 13:04:14.105404 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:04:14.105418 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:04:14.105431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:04:14.105444 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:04:14.105461 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:04:14.105474 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:04:14.105487 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:04:14.105499 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:04:14.105513 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:04:14.105525 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:04:14.105538 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:04:14.105551 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:04:14.105563 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:04:14.105579 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:04:14.105592 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:04:14.105606 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:04:14.105620 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:04:14.105632 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:04:14.105645 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:04:14.105658 kernel: Fallback order for Node 0: 0 Jan 30 13:04:14.105670 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:04:14.105686 kernel: Policy zone: Normal Jan 30 13:04:14.105708 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:04:14.105721 kernel: software IO TLB: area num 2. 
Jan 30 13:04:14.105738 kernel: Memory: 8067556K/8387460K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 319648K reserved, 0K cma-reserved) Jan 30 13:04:14.105751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:04:14.105765 kernel: ftrace: allocating 37893 entries in 149 pages Jan 30 13:04:14.105778 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:04:14.105791 kernel: Dynamic Preempt: voluntary Jan 30 13:04:14.105804 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:04:14.105822 kernel: rcu: RCU event tracing is enabled. Jan 30 13:04:14.105836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:04:14.105852 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:04:14.105866 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:04:14.105880 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:04:14.105893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:04:14.105907 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:04:14.105920 kernel: Using NULL legacy PIC Jan 30 13:04:14.105936 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:04:14.105950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:04:14.105963 kernel: Console: colour dummy device 80x25 Jan 30 13:04:14.105977 kernel: printk: console [tty1] enabled Jan 30 13:04:14.105990 kernel: printk: console [ttyS0] enabled Jan 30 13:04:14.106003 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:04:14.106017 kernel: ACPI: Core revision 20230628 Jan 30 13:04:14.106030 kernel: Failed to register legacy timer interrupt Jan 30 13:04:14.106044 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:04:14.106060 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:04:14.106073 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:04:14.106086 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:04:14.106100 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:04:14.106113 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:04:14.106127 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:04:14.106141 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:04:14.106154 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:04:14.106168 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 30 13:04:14.106184 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:04:14.106198 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:04:14.106211 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:04:14.106232 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:04:14.106247 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:04:14.106258 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:04:14.106272 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:04:14.106286 kernel: RETBleed: Vulnerable Jan 30 13:04:14.106302 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:04:14.106316 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:04:14.106334 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:04:14.106348 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:04:14.106361 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:04:14.106395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:04:14.106409 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:04:14.106423 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:04:14.106437 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:04:14.106451 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:04:14.106465 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:04:14.106479 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:04:14.106493 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:04:14.106510 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:04:14.106524 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:04:14.106538 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:04:14.106552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:04:14.106566 kernel: landlock: Up and running. Jan 30 13:04:14.106579 kernel: SELinux: Initializing. 
Jan 30 13:04:14.106593 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.106607 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.106621 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:04:14.106636 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:04:14.106650 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:04:14.106668 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:04:14.106682 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:04:14.106696 kernel: signal: max sigframe size: 3632 Jan 30 13:04:14.106710 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:04:14.106725 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:04:14.106739 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:04:14.106753 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:04:14.106767 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:04:14.106781 kernel: .... node #0, CPUs: #1 Jan 30 13:04:14.106799 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:04:14.106814 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:04:14.106828 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:04:14.106842 kernel: smpboot: Max logical packages: 1 Jan 30 13:04:14.106856 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:04:14.106871 kernel: devtmpfs: initialized Jan 30 13:04:14.106885 kernel: x86/mm: Memory block size: 128MB Jan 30 13:04:14.106899 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:04:14.106916 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:04:14.106930 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:04:14.106945 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:04:14.106959 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:04:14.106973 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:04:14.106988 kernel: audit: type=2000 audit(1738242252.027:1): state=initialized audit_enabled=0 res=1 Jan 30 13:04:14.107002 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:04:14.107016 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:04:14.107030 kernel: cpuidle: using governor menu Jan 30 13:04:14.107047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:04:14.107061 kernel: dca service started, version 1.12.1 Jan 30 13:04:14.107076 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:04:14.107090 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:04:14.107105 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:04:14.107119 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:04:14.107133 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:04:14.107147 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:04:14.107161 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:04:14.107178 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:04:14.107192 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:04:14.107206 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:04:14.107221 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:04:14.107235 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:04:14.107249 kernel: ACPI: Interpreter enabled Jan 30 13:04:14.107263 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:04:14.107277 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:04:14.107292 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:04:14.107309 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:04:14.107323 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:04:14.107337 kernel: iommu: Default domain type: Translated Jan 30 13:04:14.107351 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:04:14.107365 kernel: efivars: Registered efivars operations Jan 30 13:04:14.114981 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:04:14.115003 kernel: PCI: System does not support PCI Jan 30 13:04:14.115019 kernel: vgaarb: loaded Jan 30 13:04:14.115034 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:04:14.115053 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:04:14.115068 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:04:14.115083 kernel: pnp: PnP ACPI init Jan 30 13:04:14.115097 
kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:04:14.115112 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:04:14.115126 kernel: NET: Registered PF_INET protocol family Jan 30 13:04:14.115141 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:04:14.115156 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:04:14.115171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:04:14.115188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:04:14.115203 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:04:14.115217 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:04:14.115232 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.115247 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:04:14.115261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:04:14.115275 kernel: NET: Registered PF_XDP protocol family Jan 30 13:04:14.115290 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:04:14.115304 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:04:14.115322 kernel: software IO TLB: mapped [mem 0x000000003ae72000-0x000000003ee72000] (64MB) Jan 30 13:04:14.115336 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:04:14.115351 kernel: Initialise system trusted keyrings Jan 30 13:04:14.115364 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:04:14.115388 kernel: Key type asymmetric registered Jan 30 13:04:14.115402 kernel: Asymmetric key parser 'x509' registered Jan 30 13:04:14.115416 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:04:14.115431 kernel: io scheduler mq-deadline 
registered Jan 30 13:04:14.115445 kernel: io scheduler kyber registered Jan 30 13:04:14.115463 kernel: io scheduler bfq registered Jan 30 13:04:14.115478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:04:14.115492 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:04:14.115506 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:04:14.115521 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:04:14.115535 kernel: i8042: PNP: No PS/2 controller found. Jan 30 13:04:14.115720 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:04:14.115850 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:04:13 UTC (1738242253) Jan 30 13:04:14.115969 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:04:14.115987 kernel: intel_pstate: CPU model not supported Jan 30 13:04:14.116002 kernel: efifb: probing for efifb Jan 30 13:04:14.116016 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:04:14.116031 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:04:14.116045 kernel: efifb: scrolling: redraw Jan 30 13:04:14.116060 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:04:14.116074 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:04:14.116089 kernel: fb0: EFI VGA frame buffer device Jan 30 13:04:14.116107 kernel: pstore: Using crash dump compression: deflate Jan 30 13:04:14.116121 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:04:14.116135 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:04:14.116149 kernel: Segment Routing with IPv6 Jan 30 13:04:14.116164 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:04:14.116179 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:04:14.116193 kernel: Key type dns_resolver registered Jan 30 13:04:14.116207 kernel: IPI shorthand broadcast: enabled Jan 30 13:04:14.116221 kernel: 
sched_clock: Marking stable (844011600, 49023200)->(1107738100, -214703300) Jan 30 13:04:14.116238 kernel: registered taskstats version 1 Jan 30 13:04:14.116253 kernel: Loading compiled-in X.509 certificates Jan 30 13:04:14.116267 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4' Jan 30 13:04:14.116282 kernel: Key type .fscrypt registered Jan 30 13:04:14.116295 kernel: Key type fscrypt-provisioning registered Jan 30 13:04:14.116310 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:04:14.116324 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:04:14.116338 kernel: ima: No architecture policies found Jan 30 13:04:14.116355 kernel: clk: Disabling unused clocks Jan 30 13:04:14.116370 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:04:14.116394 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:04:14.116407 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:04:14.116419 kernel: Run /init as init process Jan 30 13:04:14.116432 kernel: with arguments: Jan 30 13:04:14.116456 kernel: /init Jan 30 13:04:14.116469 kernel: with environment: Jan 30 13:04:14.116482 kernel: HOME=/ Jan 30 13:04:14.116495 kernel: TERM=linux Jan 30 13:04:14.116513 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:04:14.116531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:04:14.116549 systemd[1]: Detected virtualization microsoft. Jan 30 13:04:14.116564 systemd[1]: Detected architecture x86-64. Jan 30 13:04:14.116577 systemd[1]: Running in initrd. Jan 30 13:04:14.116591 systemd[1]: No hostname configured, using default hostname. 
Jan 30 13:04:14.116605 systemd[1]: Hostname set to . Jan 30 13:04:14.116626 systemd[1]: Initializing machine ID from random generator. Jan 30 13:04:14.116639 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:04:14.116652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:04:14.116667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:04:14.116683 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:04:14.116698 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:04:14.116713 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:04:14.116728 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:04:14.116748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:04:14.116763 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:04:14.116778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:04:14.116792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:04:14.116807 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:04:14.116831 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:04:14.116846 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:04:14.116864 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:04:14.116879 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:04:14.116894 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 13:04:14.116909 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:04:14.116924 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:04:14.116939 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:04:14.116955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:04:14.116970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:04:14.116985 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:04:14.117004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:04:14.117019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:04:14.117034 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:04:14.117050 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:04:14.117065 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:04:14.117080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:04:14.117123 systemd-journald[177]: Collecting audit messages is disabled. Jan 30 13:04:14.117161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:14.117176 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:04:14.117192 systemd-journald[177]: Journal started Jan 30 13:04:14.117227 systemd-journald[177]: Runtime Journal (/run/log/journal/0638f834b7ae4f579e98cf5e93d42381) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:04:14.103064 systemd-modules-load[178]: Inserted module 'overlay' Jan 30 13:04:14.129303 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:04:14.130134 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:04:14.137102 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 30 13:04:14.141366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:14.154413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:04:14.156610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:04:14.165373 kernel: Bridge firewalling registered Jan 30 13:04:14.161539 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 30 13:04:14.173933 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:04:14.183592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:04:14.187436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:04:14.188478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:04:14.192521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:04:14.193998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:04:14.213670 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:04:14.223491 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:04:14.228139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:04:14.237326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:04:14.247529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:04:14.260689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 13:04:14.266405 dracut-cmdline[208]: dracut-dracut-053 Jan 30 13:04:14.266405 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:04:14.327713 systemd-resolved[217]: Positive Trust Anchors: Jan 30 13:04:14.330001 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:04:14.330062 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:04:14.361282 kernel: SCSI subsystem initialized Jan 30 13:04:14.350824 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 30 13:04:14.356114 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:04:14.364225 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:04:14.376550 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 13:04:14.388394 kernel: iscsi: registered transport (tcp) Jan 30 13:04:14.409123 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:04:14.409202 kernel: QLogic iSCSI HBA Driver Jan 30 13:04:14.445154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:04:14.456586 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:04:14.485554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:04:14.485660 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:04:14.489261 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:04:14.531409 kernel: raid6: avx512x4 gen() 18190 MB/s Jan 30 13:04:14.550399 kernel: raid6: avx512x2 gen() 18142 MB/s Jan 30 13:04:14.568392 kernel: raid6: avx512x1 gen() 18437 MB/s Jan 30 13:04:14.587394 kernel: raid6: avx2x4 gen() 18347 MB/s Jan 30 13:04:14.606391 kernel: raid6: avx2x2 gen() 18358 MB/s Jan 30 13:04:14.626325 kernel: raid6: avx2x1 gen() 14065 MB/s Jan 30 13:04:14.626366 kernel: raid6: using algorithm avx512x1 gen() 18437 MB/s Jan 30 13:04:14.646973 kernel: raid6: .... xor() 26526 MB/s, rmw enabled Jan 30 13:04:14.647025 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:04:14.674410 kernel: xor: automatically using best checksumming function avx Jan 30 13:04:14.816403 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:04:14.825433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:04:14.836620 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:04:14.854078 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 30 13:04:14.858504 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:04:14.873551 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 30 13:04:14.885675 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 30 13:04:14.912499 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:04:14.922701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:04:14.963563 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:04:14.980541 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:04:15.000585 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:04:15.010921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:04:15.018698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:04:15.028090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:04:15.039400 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:04:15.043598 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:04:15.067471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:04:15.067629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:04:15.071147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:04:15.078441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:04:15.078585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:15.090073 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:15.105483 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:04:15.105528 kernel: AES CTR mode by8 optimization enabled Jan 30 13:04:15.106762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:15.114836 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:04:15.119961 kernel: hv_vmbus: Vmbus version:5.2 Jan 30 13:04:15.141561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:04:15.141690 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:15.181476 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:04:15.181517 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:04:15.181532 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:04:15.181546 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:04:15.181558 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 13:04:15.181572 kernel: scsi host1: storvsc_host_t Jan 30 13:04:15.181748 kernel: scsi host0: storvsc_host_t Jan 30 13:04:15.181869 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:04:15.181896 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:04:15.175744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:15.188585 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:04:15.193730 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:04:15.216671 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:04:15.218524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:15.233483 kernel: PTP clock support registered Jan 30 13:04:15.233675 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:04:15.253740 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 13:04:15.253783 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:04:15.262005 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:04:15.269691 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:04:15.269715 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:04:15.269741 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:04:15.269762 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:04:15.682572 systemd-resolved[217]: Clock change detected. Flushing caches. Jan 30 13:04:15.702678 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:04:15.710939 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:04:15.710958 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:04:15.737291 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:04:15.737482 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:04:15.737667 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:04:15.737819 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:04:15.737972 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:04:15.738117 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: VF slot 1 added Jan 30 13:04:15.738309 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:15.738327 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:04:15.709016 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:04:15.746143 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:04:15.751183 kernel: hv_pci 89cd63b5-d952-43f3-a13c-dda244e90d3c: PCI VMBus probing: Using version 0x10004 Jan 30 13:04:15.796436 kernel: hv_pci 89cd63b5-d952-43f3-a13c-dda244e90d3c: PCI host bridge to bus d952:00 Jan 30 13:04:15.796628 kernel: pci_bus d952:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 30 13:04:15.796801 kernel: pci_bus d952:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 13:04:15.797578 kernel: pci d952:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 30 13:04:15.797779 kernel: pci d952:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:04:15.797954 kernel: pci d952:00:02.0: enabling Extended Tags Jan 30 13:04:15.798121 kernel: pci d952:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at d952:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 30 13:04:15.798310 kernel: pci_bus d952:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:04:15.798456 kernel: pci d952:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:04:15.994419 kernel: mlx5_core d952:00:02.0: enabling device (0000 -> 0002) Jan 30 13:04:16.227884 kernel: mlx5_core d952:00:02.0: firmware version: 14.30.5000 Jan 30 13:04:16.228084 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: VF registering: eth1 Jan 30 13:04:16.228618 kernel: mlx5_core d952:00:02.0 eth1: joined to eth0 Jan 30 13:04:16.228803 kernel: mlx5_core d952:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:04:16.234378 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Jan 30 13:04:16.246263 kernel: mlx5_core d952:00:02.0 enP55634s1: renamed from eth1 Jan 30 13:04:16.322155 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459) Jan 30 13:04:16.337834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:04:16.363151 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (457) Jan 30 13:04:16.376695 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:04:16.380254 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:04:16.391093 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:04:16.400940 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 30 13:04:16.418150 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:16.425200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:17.433158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:04:17.433224 disk-uuid[603]: The operation has completed successfully. Jan 30 13:04:17.510741 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:04:17.510857 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:04:17.533252 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:04:17.539114 sh[689]: Success Jan 30 13:04:17.579156 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:04:17.801256 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:04:17.817225 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:04:17.822119 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:04:17.841172 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:04:17.841260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:17.844524 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:04:17.847586 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:04:17.850565 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:04:18.222272 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:04:18.227622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:04:18.237335 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:04:18.243437 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:04:18.253239 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:18.253289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:18.257694 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:04:18.276154 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:04:18.291690 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:18.291248 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:04:18.303064 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:04:18.313309 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:04:18.337868 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:04:18.351304 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 13:04:18.370339 systemd-networkd[873]: lo: Link UP Jan 30 13:04:18.370348 systemd-networkd[873]: lo: Gained carrier Jan 30 13:04:18.372485 systemd-networkd[873]: Enumeration completed Jan 30 13:04:18.372721 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:04:18.373939 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:04:18.373943 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:04:18.377145 systemd[1]: Reached target network.target - Network. Jan 30 13:04:18.436147 kernel: mlx5_core d952:00:02.0 enP55634s1: Link up Jan 30 13:04:18.478157 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: Data path switched to VF: enP55634s1 Jan 30 13:04:18.479266 systemd-networkd[873]: enP55634s1: Link UP Jan 30 13:04:18.479422 systemd-networkd[873]: eth0: Link UP Jan 30 13:04:18.479626 systemd-networkd[873]: eth0: Gained carrier Jan 30 13:04:18.479643 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:04:18.491350 systemd-networkd[873]: enP55634s1: Gained carrier Jan 30 13:04:18.522198 systemd-networkd[873]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:04:19.258519 ignition[834]: Ignition 2.20.0 Jan 30 13:04:19.258532 ignition[834]: Stage: fetch-offline Jan 30 13:04:19.260848 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:04:19.258573 ignition[834]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.258583 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.258691 ignition[834]: parsed url from cmdline: "" Jan 30 13:04:19.258695 ignition[834]: no config URL provided Jan 30 13:04:19.258702 ignition[834]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:04:19.258712 ignition[834]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:04:19.258719 ignition[834]: failed to fetch config: resource requires networking Jan 30 13:04:19.258934 ignition[834]: Ignition finished successfully Jan 30 13:04:19.296300 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:04:19.309733 ignition[882]: Ignition 2.20.0 Jan 30 13:04:19.309744 ignition[882]: Stage: fetch Jan 30 13:04:19.309949 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.309963 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.310066 ignition[882]: parsed url from cmdline: "" Jan 30 13:04:19.310069 ignition[882]: no config URL provided Jan 30 13:04:19.310073 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:04:19.310081 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:04:19.310103 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:04:19.383322 ignition[882]: GET result: OK Jan 30 13:04:19.383423 ignition[882]: config has been read from IMDS userdata Jan 30 13:04:19.383457 ignition[882]: parsing config with SHA512: 5df08a0d68041d0f562bc83f7062a9d3bec5c9da53434633d1aa85a3d46bcf7a2953265d4e1f0f9212bf060634d3e498e0568170295c99d417effcdca7a3fe97 Jan 30 13:04:19.389105 unknown[882]: fetched base config from "system" Jan 30 13:04:19.389120 unknown[882]: fetched base config from "system" Jan 30 13:04:19.389519 ignition[882]: fetch: fetch complete Jan 30 
13:04:19.389140 unknown[882]: fetched user config from "azure" Jan 30 13:04:19.389524 ignition[882]: fetch: fetch passed Jan 30 13:04:19.391201 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:04:19.389566 ignition[882]: Ignition finished successfully Jan 30 13:04:19.403462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:04:19.418090 ignition[888]: Ignition 2.20.0 Jan 30 13:04:19.418101 ignition[888]: Stage: kargs Jan 30 13:04:19.420485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:04:19.418332 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.418346 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.419254 ignition[888]: kargs: kargs passed Jan 30 13:04:19.419300 ignition[888]: Ignition finished successfully Jan 30 13:04:19.435420 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:04:19.449143 ignition[894]: Ignition 2.20.0 Jan 30 13:04:19.449172 ignition[894]: Stage: disks Jan 30 13:04:19.449400 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:04:19.451339 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:04:19.449409 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:04:19.455522 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:04:19.450444 ignition[894]: disks: disks passed Jan 30 13:04:19.460330 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:04:19.450487 ignition[894]: Ignition finished successfully Jan 30 13:04:19.463556 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:04:19.481585 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:04:19.484247 systemd[1]: Reached target basic.target - Basic System. 
Jan 30 13:04:19.496436 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:04:19.561369 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:04:19.566480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:04:19.576451 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:04:19.591940 systemd-networkd[873]: enP55634s1: Gained IPv6LL Jan 30 13:04:19.668172 kernel: EXT4-fs (sda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:04:19.668799 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:04:19.671756 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:04:19.714288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:04:19.719972 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:04:19.728149 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (914) Jan 30 13:04:19.731340 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:04:19.747663 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:04:19.747694 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:04:19.747709 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:04:19.734735 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:04:19.759764 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:04:19.734772 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:04:19.745521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 30 13:04:19.761020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:04:19.779346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:04:19.974268 systemd-networkd[873]: eth0: Gained IPv6LL
Jan 30 13:04:20.425833 coreos-metadata[916]: Jan 30 13:04:20.425 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:04:20.430234 coreos-metadata[916]: Jan 30 13:04:20.428 INFO Fetch successful
Jan 30 13:04:20.430234 coreos-metadata[916]: Jan 30 13:04:20.428 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:04:20.440463 coreos-metadata[916]: Jan 30 13:04:20.440 INFO Fetch successful
Jan 30 13:04:20.455392 coreos-metadata[916]: Jan 30 13:04:20.455 INFO wrote hostname ci-4186.1.0-a-d95fc4b65f to /sysroot/etc/hostname
Jan 30 13:04:20.460253 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:04:20.467060 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:04:20.483049 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:04:20.488546 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:04:20.495162 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:04:21.279210 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:04:21.289244 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:04:21.297312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:04:21.307043 kernel: BTRFS info (device sda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:04:21.306613 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:04:21.333348 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:04:21.338376 ignition[1033]: INFO : Ignition 2.20.0
Jan 30 13:04:21.338376 ignition[1033]: INFO : Stage: mount
Jan 30 13:04:21.342355 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:04:21.342355 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:04:21.342355 ignition[1033]: INFO : mount: mount passed
Jan 30 13:04:21.342355 ignition[1033]: INFO : Ignition finished successfully
Jan 30 13:04:21.340693 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:04:21.353199 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:04:21.367323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:04:21.378144 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1046)
Jan 30 13:04:21.378184 kernel: BTRFS info (device sda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 13:04:21.383150 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:04:21.387092 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:04:21.392149 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:04:21.393492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:04:21.418723 ignition[1062]: INFO : Ignition 2.20.0
Jan 30 13:04:21.418723 ignition[1062]: INFO : Stage: files
Jan 30 13:04:21.423743 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:04:21.423743 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:04:21.423743 ignition[1062]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:04:21.423743 ignition[1062]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:04:21.423743 ignition[1062]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:04:21.531815 ignition[1062]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:04:21.535682 ignition[1062]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:04:21.539036 ignition[1062]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:04:21.536513 unknown[1062]: wrote ssh authorized keys file for user: core
Jan 30 13:04:21.593760 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:04:21.598915 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:04:21.642810 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 13:04:21.827259 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:04:21.827259 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:04:21.839207 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 30 13:04:22.194950 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:04:22.240881 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 30 13:04:22.245515 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:04:22.249955 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:04:22.249955 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:04:22.258898 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:04:22.258898 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:04:22.268475 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:04:22.273019 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:04:22.273019 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:04:22.286858 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:04:22.291913 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:04:22.296530 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:04:22.759900 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:04:22.976775 ignition[1062]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:04:22.976775 ignition[1062]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 30 13:04:22.991587 ignition[1062]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:04:22.998209 ignition[1062]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:04:22.998209 ignition[1062]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 30 13:04:22.998209 ignition[1062]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:04:23.009612 ignition[1062]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:04:23.013313 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:04:23.018141 ignition[1062]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:04:23.022957 ignition[1062]: INFO : files: files passed
Jan 30 13:04:23.022957 ignition[1062]: INFO : Ignition finished successfully
Jan 30 13:04:23.025858 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:04:23.035667 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:04:23.041751 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:04:23.063861 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:04:23.066340 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:04:23.076958 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:04:23.076958 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:04:23.091060 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:04:23.080508 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:04:23.086991 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:04:23.106329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:04:23.133759 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:04:23.133879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:04:23.139273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:04:23.145244 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:04:23.150038 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:04:23.162287 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:04:23.175462 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:04:23.185381 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:04:23.195386 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:04:23.196696 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:04:23.196973 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:04:23.197432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:04:23.197536 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:04:23.209634 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:04:23.214322 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:04:23.225058 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:04:23.236138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:04:23.239251 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:04:23.246394 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:04:23.252151 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:04:23.255561 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:04:23.261091 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:04:23.265919 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:04:23.271098 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:04:23.271276 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:04:23.276687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:04:23.281338 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:04:23.287365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:04:23.289780 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:04:23.296286 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:04:23.296444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:04:23.311818 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:04:23.311970 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:04:23.318624 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:04:23.318817 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:04:23.329292 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:04:23.329471 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:04:23.347395 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:04:23.349579 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:04:23.351792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:04:23.358076 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:04:23.366643 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:04:23.368489 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:04:23.376074 ignition[1115]: INFO : Ignition 2.20.0
Jan 30 13:04:23.376074 ignition[1115]: INFO : Stage: umount
Jan 30 13:04:23.376074 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:04:23.376074 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:04:23.376074 ignition[1115]: INFO : umount: umount passed
Jan 30 13:04:23.376074 ignition[1115]: INFO : Ignition finished successfully
Jan 30 13:04:23.378785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:04:23.378923 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:04:23.392650 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:04:23.392742 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:04:23.406688 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:04:23.407842 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:04:23.421665 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:04:23.422895 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:04:23.429782 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:04:23.429852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:04:23.437119 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:04:23.437226 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:04:23.442009 systemd[1]: Stopped target network.target - Network.
Jan 30 13:04:23.448573 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:04:23.448648 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:04:23.451641 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:04:23.456306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:04:23.461478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:04:23.468389 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:04:23.476865 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:04:23.479309 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:04:23.479359 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:04:23.484193 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:04:23.484243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:04:23.488006 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:04:23.489025 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:04:23.494957 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:04:23.497088 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:04:23.509734 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:04:23.515376 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:04:23.522526 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:04:23.522991 systemd-networkd[873]: eth0: DHCPv6 lease lost
Jan 30 13:04:23.524493 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:04:23.524591 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:04:23.533982 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:04:23.534097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:04:23.540491 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:04:23.540700 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:04:23.544677 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:04:23.544734 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:04:23.548950 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:04:23.549011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:04:23.566768 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:04:23.571869 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:04:23.571953 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:04:23.572421 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:04:23.572466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:04:23.581796 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:04:23.581848 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:04:23.590223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:04:23.590287 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:04:23.595977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:04:23.624952 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:04:23.625143 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:04:23.630497 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:04:23.630542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:04:23.634797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:04:23.634835 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:04:23.658156 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: Data path switched from VF: enP55634s1
Jan 30 13:04:23.635195 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:04:23.635237 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:04:23.636092 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:04:23.636124 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:04:23.649328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:04:23.651784 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:04:23.675616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:04:23.680091 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:04:23.683051 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:04:23.689293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:04:23.689358 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:04:23.701356 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:04:23.701454 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:04:23.706042 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:04:23.706188 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:04:23.725466 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:04:23.734290 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:04:23.743902 systemd[1]: Switching root.
Jan 30 13:04:23.796292 systemd-journald[177]: Journal stopped
Jan 30 13:04:28.981755 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:04:28.981792 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:04:28.981803 kernel: SELinux: policy capability open_perms=1
Jan 30 13:04:28.981812 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:04:28.981820 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:04:28.981828 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:04:28.981837 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:04:28.981849 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:04:28.981859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:04:28.981868 kernel: audit: type=1403 audit(1738242266.049:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:04:28.981877 systemd[1]: Successfully loaded SELinux policy in 187.859ms.
Jan 30 13:04:28.981890 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.431ms.
Jan 30 13:04:28.981901 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:04:28.981911 systemd[1]: Detected virtualization microsoft.
Jan 30 13:04:28.981924 systemd[1]: Detected architecture x86-64.
Jan 30 13:04:28.981935 systemd[1]: Detected first boot.
Jan 30 13:04:28.981952 systemd[1]: Hostname set to .
Jan 30 13:04:28.981965 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:04:28.981975 zram_generator::config[1158]: No configuration found.
Jan 30 13:04:28.981989 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:04:28.982002 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:04:28.982015 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:04:28.982026 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:04:28.982039 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:04:28.982053 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:04:28.982067 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:04:28.982083 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:04:28.982093 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:04:28.982106 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:04:28.982116 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:04:28.985391 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:04:28.985425 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:04:28.985442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:04:28.985463 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:04:28.985481 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:04:28.985498 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:04:28.985521 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:04:28.985535 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:04:28.985548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:04:28.985567 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:04:28.985588 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:04:28.985604 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:04:28.985624 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:04:28.985642 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:04:28.985659 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:04:28.985677 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:04:28.985693 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:04:28.985713 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:04:28.985731 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:04:28.985747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:04:28.985767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:04:28.985786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:04:28.985804 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:04:28.985821 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:04:28.985841 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:04:28.985857 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:04:28.985872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:04:28.985887 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:04:28.985904 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:04:28.985920 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:04:28.985938 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:04:28.985953 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:04:28.985972 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:04:28.985989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:04:28.986005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:04:28.986021 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:04:28.986038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:04:28.986055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:04:28.986075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:04:28.986091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:04:28.986107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:04:28.986144 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:04:28.986161 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:04:28.986178 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:04:28.986194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:04:28.986210 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:04:28.986226 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:04:28.986242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:04:28.986258 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:04:28.986279 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:04:28.986328 systemd-journald[1243]: Collecting audit messages is disabled.
Jan 30 13:04:28.986368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:04:28.986385 systemd-journald[1243]: Journal started
Jan 30 13:04:28.986420 systemd-journald[1243]: Runtime Journal (/run/log/journal/91c742ae619b4253bc56f6e6b0d7c4ac) is 8.0M, max 158.8M, 150.8M free.
Jan 30 13:04:28.336605 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:04:28.477719 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 13:04:28.478150 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:04:29.014158 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:04:29.014248 systemd[1]: Stopped verity-setup.service.
Jan 30 13:04:29.014273 kernel: loop: module loaded
Jan 30 13:04:29.014293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:04:29.014321 kernel: fuse: init (API version 7.39)
Jan 30 13:04:29.022913 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:04:29.025522 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:04:29.028495 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:04:29.031890 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:04:29.034487 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:04:29.037360 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:04:29.040830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:04:29.044332 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:04:29.048299 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:04:29.051993 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:04:29.052253 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:04:29.055703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:04:29.055932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:04:29.059589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:04:29.059825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:04:29.064704 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:04:29.064957 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:04:29.068036 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:04:29.068295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:04:29.071388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:04:29.075849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:04:29.081417 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:04:29.085367 kernel: ACPI: bus type drm_connector registered Jan 30 13:04:29.086662 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:04:29.086848 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:04:29.105161 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:04:29.120233 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 30 13:04:29.135230 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:04:29.138462 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:04:29.138522 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:04:29.144493 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:04:29.156297 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:04:29.164312 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:04:29.167104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:04:29.183420 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:04:29.191272 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:04:29.194297 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:04:29.196433 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:04:29.197720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:04:29.204409 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:04:29.211319 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:04:29.216698 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:04:29.223144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:04:29.227001 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 30 13:04:29.231390 systemd-journald[1243]: Time spent on flushing to /var/log/journal/91c742ae619b4253bc56f6e6b0d7c4ac is 37.968ms for 960 entries. Jan 30 13:04:29.231390 systemd-journald[1243]: System Journal (/var/log/journal/91c742ae619b4253bc56f6e6b0d7c4ac) is 8.0M, max 2.6G, 2.6G free. Jan 30 13:04:29.301389 systemd-journald[1243]: Received client request to flush runtime journal. Jan 30 13:04:29.235572 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:04:29.239253 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:04:29.249495 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:04:29.257940 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:04:29.269730 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:04:29.284405 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:04:29.305231 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:04:29.314849 udevadm[1304]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:04:29.323516 kernel: loop0: detected capacity change from 0 to 28304 Jan 30 13:04:29.321305 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:04:29.347974 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:04:29.348815 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:04:29.480063 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:04:29.489395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:04:29.551001 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. 
Jan 30 13:04:29.551028 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Jan 30 13:04:29.558730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:04:29.667149 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:04:29.737161 kernel: loop1: detected capacity change from 0 to 141000 Jan 30 13:04:30.160158 kernel: loop2: detected capacity change from 0 to 138184 Jan 30 13:04:30.381269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:04:30.391434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:04:30.415964 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jan 30 13:04:30.697157 kernel: loop3: detected capacity change from 0 to 210664 Jan 30 13:04:30.729390 kernel: loop4: detected capacity change from 0 to 28304 Jan 30 13:04:30.730262 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:04:30.745715 kernel: loop5: detected capacity change from 0 to 141000 Jan 30 13:04:30.746302 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:04:30.772169 kernel: loop6: detected capacity change from 0 to 138184 Jan 30 13:04:30.793197 kernel: loop7: detected capacity change from 0 to 210664 Jan 30 13:04:30.806968 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 30 13:04:30.809992 (sd-merge)[1321]: Merged extensions into '/usr'. Jan 30 13:04:30.844455 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:04:30.845755 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:04:30.845778 systemd[1]: Reloading... 
Jan 30 13:04:30.937243 kernel: hv_vmbus: registering driver hv_balloon Jan 30 13:04:30.944214 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 30 13:04:31.012154 zram_generator::config[1385]: No configuration found. Jan 30 13:04:31.012263 kernel: hv_vmbus: registering driver hyperv_fb Jan 30 13:04:31.033619 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 30 13:04:31.033746 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 30 13:04:31.043713 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:04:31.051586 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:04:31.077237 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:04:31.139181 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1339) Jan 30 13:04:31.466573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:04:31.538154 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 30 13:04:31.613862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:04:31.618029 systemd[1]: Reloading finished in 771 ms. Jan 30 13:04:31.647923 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:04:31.667722 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:04:31.687304 systemd[1]: Starting ensure-sysext.service... Jan 30 13:04:31.692317 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:04:31.698474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 30 13:04:31.704302 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:04:31.713329 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:04:31.721305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:04:31.737191 systemd[1]: Reloading requested from client PID 1505 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:04:31.737215 systemd[1]: Reloading... Jan 30 13:04:31.753423 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:04:31.756900 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:04:31.760694 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:04:31.763115 systemd-tmpfiles[1508]: ACLs are not supported, ignoring. Jan 30 13:04:31.763335 systemd-tmpfiles[1508]: ACLs are not supported, ignoring. Jan 30 13:04:31.781912 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:04:31.781931 systemd-tmpfiles[1508]: Skipping /boot Jan 30 13:04:31.797653 lvm[1506]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:04:31.823013 zram_generator::config[1538]: No configuration found. Jan 30 13:04:31.857708 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:04:31.857878 systemd-tmpfiles[1508]: Skipping /boot Jan 30 13:04:32.039780 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 13:04:32.051997 systemd-networkd[1327]: lo: Link UP Jan 30 13:04:32.052012 systemd-networkd[1327]: lo: Gained carrier Jan 30 13:04:32.056124 systemd-networkd[1327]: Enumeration completed Jan 30 13:04:32.056882 systemd-networkd[1327]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:04:32.056995 systemd-networkd[1327]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:04:32.111150 kernel: mlx5_core d952:00:02.0 enP55634s1: Link up Jan 30 13:04:32.129361 kernel: hv_netvsc 0022483e-f156-0022-483e-f1560022483e eth0: Data path switched to VF: enP55634s1 Jan 30 13:04:32.130683 systemd-networkd[1327]: enP55634s1: Link UP Jan 30 13:04:32.130831 systemd-networkd[1327]: eth0: Link UP Jan 30 13:04:32.130836 systemd-networkd[1327]: eth0: Gained carrier Jan 30 13:04:32.130859 systemd-networkd[1327]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:04:32.132792 systemd[1]: Reloading finished in 395 ms. Jan 30 13:04:32.135383 systemd-networkd[1327]: enP55634s1: Gained carrier Jan 30 13:04:32.151849 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:04:32.154849 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:04:32.162212 systemd-networkd[1327]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 30 13:04:32.164022 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:04:32.168195 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:04:32.171982 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:04:32.179777 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:04:32.187403 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:04:32.216677 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:04:32.220549 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:04:32.225159 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:04:32.239537 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:04:32.246418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:04:32.259466 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:04:32.264147 lvm[1612]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:04:32.269247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:04:32.269537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:04:32.274420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:04:32.283429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:04:32.294412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:04:32.297338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:04:32.297512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:04:32.301994 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 13:04:32.303025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:04:32.303725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:04:32.303860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:04:32.315074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:04:32.315427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:04:32.324935 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:04:32.331571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:04:32.331763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:04:32.337043 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:04:32.337352 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:04:32.352803 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:04:32.353222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:04:32.360407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:04:32.363227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:04:32.363396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 30 13:04:32.363552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:04:32.363753 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:04:32.370170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:04:32.372429 systemd[1]: Finished ensure-sysext.service. Jan 30 13:04:32.375810 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:04:32.379760 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:04:32.379943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:04:32.384429 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:04:32.423000 augenrules[1650]: No rules Jan 30 13:04:32.424977 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:04:32.425217 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:04:32.428420 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:04:32.446398 systemd-resolved[1616]: Positive Trust Anchors: Jan 30 13:04:32.446413 systemd-resolved[1616]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:04:32.446461 systemd-resolved[1616]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:04:32.492057 systemd-resolved[1616]: Using system hostname 'ci-4186.1.0-a-d95fc4b65f'. Jan 30 13:04:32.493922 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:04:32.499373 systemd[1]: Reached target network.target - Network. Jan 30 13:04:32.501691 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:04:32.835019 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:04:32.838837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:04:33.926434 systemd-networkd[1327]: enP55634s1: Gained IPv6LL Jan 30 13:04:34.183256 systemd-networkd[1327]: eth0: Gained IPv6LL Jan 30 13:04:34.186173 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:04:34.189626 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:04:34.946228 ldconfig[1289]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:04:34.957306 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 30 13:04:34.966555 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:04:34.991675 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:04:34.994898 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:04:34.997711 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:04:35.000813 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:04:35.005300 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:04:35.009817 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:04:35.012910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:04:35.015834 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:04:35.015877 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:04:35.018230 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:04:35.022726 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:04:35.026970 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:04:35.039052 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:04:35.042690 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:04:35.045817 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:04:35.048061 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:04:35.050369 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:04:35.050404 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 30 13:04:35.073238 systemd[1]: Starting chronyd.service - NTP client/server... Jan 30 13:04:35.079276 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:04:35.089205 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:04:35.094333 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:04:35.105271 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:04:35.110869 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:04:35.113416 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:04:35.113466 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 30 13:04:35.114677 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 30 13:04:35.118716 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 30 13:04:35.121240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:04:35.132305 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:04:35.137495 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:04:35.146193 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:04:35.149463 jq[1668]: false Jan 30 13:04:35.150060 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:04:35.159479 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:04:35.167404 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 30 13:04:35.170925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:04:35.171546 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:04:35.172695 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:04:35.178098 KVP[1670]: KVP starting; pid is:1670 Jan 30 13:04:35.179264 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:04:35.189522 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:04:35.189751 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:04:35.190914 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:04:35.191164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:04:35.207602 extend-filesystems[1669]: Found loop4 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found loop5 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found loop6 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found loop7 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda1 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda2 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda3 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found usr Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda4 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda6 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda7 Jan 30 13:04:35.210390 extend-filesystems[1669]: Found sda9 Jan 30 13:04:35.210390 extend-filesystems[1669]: Checking size of /dev/sda9 Jan 30 13:04:35.296766 kernel: hv_utils: KVP IC version 4.0 Jan 30 13:04:35.287657 (ntainerd)[1702]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:04:35.300697 update_engine[1683]: I20250130 13:04:35.271334 1683 main.cc:92] Flatcar Update Engine starting Jan 30 13:04:35.216870 KVP[1670]: KVP LIC Version: 3.1 Jan 30 13:04:35.306682 extend-filesystems[1669]: Old size kept for /dev/sda9 Jan 30 13:04:35.306682 extend-filesystems[1669]: Found sr0 Jan 30 13:04:35.308507 (chronyd)[1664]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 30 13:04:35.316049 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:04:35.316395 jq[1684]: true Jan 30 13:04:35.316284 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:04:35.338233 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:04:35.338489 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:04:35.349582 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:04:35.356692 chronyd[1716]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 30 13:04:35.369077 jq[1713]: true Jan 30 13:04:35.376205 dbus-daemon[1667]: [system] SELinux support is enabled Jan 30 13:04:35.382452 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:04:35.387009 update_engine[1683]: I20250130 13:04:35.386950 1683 update_check_scheduler.cc:74] Next update check in 7m20s Jan 30 13:04:35.397138 tar[1693]: linux-amd64/helm Jan 30 13:04:35.395813 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:04:35.395849 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:04:35.399846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:04:35.399870 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:04:35.403390 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:04:35.413066 chronyd[1716]: Timezone right/UTC failed leap second check, ignoring Jan 30 13:04:35.416310 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:04:35.413788 chronyd[1716]: Loaded seccomp filter (level 2) Jan 30 13:04:35.420607 systemd[1]: Started chronyd.service - NTP client/server. Jan 30 13:04:35.501645 systemd-logind[1681]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:04:35.506393 systemd-logind[1681]: New seat seat0. 
Jan 30 13:04:35.560211 bash[1748]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:04:35.563898 coreos-metadata[1666]: Jan 30 13:04:35.563 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:04:35.567873 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:04:35.569197 coreos-metadata[1666]: Jan 30 13:04:35.569 INFO Fetch successful Jan 30 13:04:35.576331 coreos-metadata[1666]: Jan 30 13:04:35.569 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 30 13:04:35.584207 coreos-metadata[1666]: Jan 30 13:04:35.581 INFO Fetch successful Jan 30 13:04:35.584207 coreos-metadata[1666]: Jan 30 13:04:35.583 INFO Fetching http://168.63.129.16/machine/a4c0f80c-ecba-4797-b6bd-b1cfc88c5e86/e6ddd5c3%2D6ae8%2D4dd9%2Da3fb%2Dc864c4b7acb8.%5Fci%2D4186.1.0%2Da%2Dd95fc4b65f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 30 13:04:35.588751 coreos-metadata[1666]: Jan 30 13:04:35.586 INFO Fetch successful Jan 30 13:04:35.588751 coreos-metadata[1666]: Jan 30 13:04:35.588 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:04:35.602284 coreos-metadata[1666]: Jan 30 13:04:35.602 INFO Fetch successful Jan 30 13:04:35.626172 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:04:35.642961 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:04:35.652322 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1750) Jan 30 13:04:35.698397 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:04:35.706028 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 30 13:04:35.888894 locksmithd[1731]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:04:36.153667 sshd_keygen[1706]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:04:36.201009 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:04:36.214052 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:04:36.228377 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 30 13:04:36.261530 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:04:36.261759 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:04:36.274641 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:04:36.288303 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 30 13:04:36.310457 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:04:36.336584 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:04:36.349478 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:04:36.352816 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:04:36.458622 tar[1693]: linux-amd64/LICENSE
Jan 30 13:04:36.458622 tar[1693]: linux-amd64/README.md
Jan 30 13:04:36.470765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 13:04:36.778505 containerd[1702]: time="2025-01-30T13:04:36.776854600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 30 13:04:36.811638 containerd[1702]: time="2025-01-30T13:04:36.811585100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.813478 containerd[1702]: time="2025-01-30T13:04:36.813424100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:04:36.813478 containerd[1702]: time="2025-01-30T13:04:36.813471100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:04:36.813586 containerd[1702]: time="2025-01-30T13:04:36.813492700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:04:36.813955 containerd[1702]: time="2025-01-30T13:04:36.813670300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:04:36.813955 containerd[1702]: time="2025-01-30T13:04:36.813711000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.813955 containerd[1702]: time="2025-01-30T13:04:36.813806000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:04:36.813955 containerd[1702]: time="2025-01-30T13:04:36.813823700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814122 containerd[1702]: time="2025-01-30T13:04:36.814049200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814122 containerd[1702]: time="2025-01-30T13:04:36.814072500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814122 containerd[1702]: time="2025-01-30T13:04:36.814091300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814122 containerd[1702]: time="2025-01-30T13:04:36.814105700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814267 containerd[1702]: time="2025-01-30T13:04:36.814222400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814533 containerd[1702]: time="2025-01-30T13:04:36.814477000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814942 containerd[1702]: time="2025-01-30T13:04:36.814646800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:04:36.814942 containerd[1702]: time="2025-01-30T13:04:36.814669000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:04:36.814942 containerd[1702]: time="2025-01-30T13:04:36.814764800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:04:36.814942 containerd[1702]: time="2025-01-30T13:04:36.814816700Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:04:36.840356 containerd[1702]: time="2025-01-30T13:04:36.840284500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.840502800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.840686700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.840715200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.840736100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.840912000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841226300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841371200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841392700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841411100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841431200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841448600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841464800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841490700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842238 containerd[1702]: time="2025-01-30T13:04:36.841542700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841566700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841583900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841603100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841633400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841654900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841672400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841690900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841706700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841724300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841738600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841755400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841770700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841788100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.842729 containerd[1702]: time="2025-01-30T13:04:36.841814800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841832200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841847900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841866900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841895400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841912400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841926900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841974300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.841999300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.842014300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.842030000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.842047200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.842063600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.842079200Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:04:36.843192 containerd[1702]: time="2025-01-30T13:04:36.842093800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:04:36.844108 containerd[1702]: time="2025-01-30T13:04:36.844036200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:04:36.844353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.844384300Z" level=info msg="Connect containerd service"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.844449600Z" level=info msg="using legacy CRI server"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.844459900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.844572900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845096500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845407900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845450900Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845511900Z" level=info msg="Start subscribing containerd event"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845553000Z" level=info msg="Start recovering state"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845609800Z" level=info msg="Start event monitor"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845620200Z" level=info msg="Start snapshots syncer"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845628600Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845639300Z" level=info msg="Start streaming server"
Jan 30 13:04:36.846228 containerd[1702]: time="2025-01-30T13:04:36.845689400Z" level=info msg="containerd successfully booted in 0.069647s"
Jan 30 13:04:36.847835 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:04:36.851923 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:04:36.855425 systemd[1]: Startup finished in 763ms (firmware) + 28.397s (loader) + 984ms (kernel) + 11.751s (initrd) + 10.991s (userspace) = 52.889s.
Jan 30 13:04:36.871192 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:04:36.900100 agetty[1846]: failed to open credentials directory
Jan 30 13:04:36.901064 agetty[1847]: failed to open credentials directory
Jan 30 13:04:37.108008 login[1846]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 13:04:37.113247 login[1847]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 30 13:04:37.126716 systemd-logind[1681]: New session 1 of user core.
Jan 30 13:04:37.127928 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:04:37.134523 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:04:37.139435 systemd-logind[1681]: New session 2 of user core.
Jan 30 13:04:37.153768 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:04:37.163366 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:04:37.181783 (systemd)[1871]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:04:37.376698 systemd[1871]: Queued start job for default target default.target.
Jan 30 13:04:37.384247 systemd[1871]: Created slice app.slice - User Application Slice.
Jan 30 13:04:37.384278 systemd[1871]: Reached target paths.target - Paths.
Jan 30 13:04:37.384291 systemd[1871]: Reached target timers.target - Timers.
Jan 30 13:04:37.385548 systemd[1871]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:04:37.396736 systemd[1871]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:04:37.396858 systemd[1871]: Reached target sockets.target - Sockets.
Jan 30 13:04:37.396877 systemd[1871]: Reached target basic.target - Basic System.
Jan 30 13:04:37.396923 systemd[1871]: Reached target default.target - Main User Target.
Jan 30 13:04:37.396958 systemd[1871]: Startup finished in 206ms.
Jan 30 13:04:37.397123 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:04:37.399260 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:04:37.400524 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:04:37.614929 kubelet[1860]: E0130 13:04:37.614872 1860 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:04:37.617327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:04:37.617514 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:04:38.244956 waagent[1843]: 2025-01-30T13:04:38.244842Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 30 13:04:38.247890 waagent[1843]: 2025-01-30T13:04:38.247819Z INFO Daemon Daemon OS: flatcar 4186.1.0
Jan 30 13:04:38.250240 waagent[1843]: 2025-01-30T13:04:38.250181Z INFO Daemon Daemon Python: 3.11.10
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.251493Z INFO Daemon Daemon Run daemon
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.253095Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0'
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.254631Z INFO Daemon Daemon Using waagent for provisioning
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.255655Z INFO Daemon Daemon Activate resource disk
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.256407Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.261444Z INFO Daemon Daemon Found device: None
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.262214Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.263196Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.264073Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 30 13:04:38.270107 waagent[1843]: 2025-01-30T13:04:38.265097Z INFO Daemon Daemon Running default provisioning handler
Jan 30 13:04:38.286850 waagent[1843]: 2025-01-30T13:04:38.286757Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 30 13:04:38.293414 waagent[1843]: 2025-01-30T13:04:38.293348Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 30 13:04:38.301600 waagent[1843]: 2025-01-30T13:04:38.294585Z INFO Daemon Daemon cloud-init is enabled: False
Jan 30 13:04:38.301600 waagent[1843]: 2025-01-30T13:04:38.295380Z INFO Daemon Daemon Copying ovf-env.xml
Jan 30 13:04:38.375540 waagent[1843]: 2025-01-30T13:04:38.375363Z INFO Daemon Daemon Successfully mounted dvd
Jan 30 13:04:38.406018 waagent[1843]: 2025-01-30T13:04:38.404281Z INFO Daemon Daemon Detect protocol endpoint
Jan 30 13:04:38.404449 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 30 13:04:38.406811 waagent[1843]: 2025-01-30T13:04:38.406741Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 30 13:04:38.421259 waagent[1843]: 2025-01-30T13:04:38.407821Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 30 13:04:38.421259 waagent[1843]: 2025-01-30T13:04:38.408595Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 30 13:04:38.421259 waagent[1843]: 2025-01-30T13:04:38.409579Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 30 13:04:38.421259 waagent[1843]: 2025-01-30T13:04:38.409912Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 30 13:04:38.445020 waagent[1843]: 2025-01-30T13:04:38.444946Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 30 13:04:38.452570 waagent[1843]: 2025-01-30T13:04:38.446383Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 30 13:04:38.452570 waagent[1843]: 2025-01-30T13:04:38.446979Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 30 13:04:38.656429 waagent[1843]: 2025-01-30T13:04:38.656320Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 30 13:04:38.659691 waagent[1843]: 2025-01-30T13:04:38.659601Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 30 13:04:38.666925 waagent[1843]: 2025-01-30T13:04:38.666852Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 30 13:04:38.680918 waagent[1843]: 2025-01-30T13:04:38.680857Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.162
Jan 30 13:04:38.695860 waagent[1843]: 2025-01-30T13:04:38.682564Z INFO Daemon
Jan 30 13:04:38.695860 waagent[1843]: 2025-01-30T13:04:38.684241Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5d9223f6-4de8-46ac-8b7b-2f8189cd4a3f eTag: 6845681873160493461 source: Fabric]
Jan 30 13:04:38.695860 waagent[1843]: 2025-01-30T13:04:38.685740Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 30 13:04:38.695860 waagent[1843]: 2025-01-30T13:04:38.686805Z INFO Daemon
Jan 30 13:04:38.695860 waagent[1843]: 2025-01-30T13:04:38.687157Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 30 13:04:38.698728 waagent[1843]: 2025-01-30T13:04:38.698682Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 30 13:04:38.763013 waagent[1843]: 2025-01-30T13:04:38.762926Z INFO Daemon Downloaded certificate {'thumbprint': '221EA6614E0EB3A1EBAD92DDC57E01F292557030', 'hasPrivateKey': True}
Jan 30 13:04:38.767810 waagent[1843]: 2025-01-30T13:04:38.767745Z INFO Daemon Fetch goal state completed
Jan 30 13:04:38.776474 waagent[1843]: 2025-01-30T13:04:38.776427Z INFO Daemon Daemon Starting provisioning
Jan 30 13:04:38.783566 waagent[1843]: 2025-01-30T13:04:38.777663Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 30 13:04:38.783566 waagent[1843]: 2025-01-30T13:04:38.778556Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-d95fc4b65f]
Jan 30 13:04:38.795649 waagent[1843]: 2025-01-30T13:04:38.795571Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-d95fc4b65f]
Jan 30 13:04:38.802981 waagent[1843]: 2025-01-30T13:04:38.796844Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 30 13:04:38.802981 waagent[1843]: 2025-01-30T13:04:38.797836Z INFO Daemon Daemon Primary interface is [eth0]
Jan 30 13:04:38.822992 systemd-networkd[1327]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:04:38.823002 systemd-networkd[1327]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:04:38.823050 systemd-networkd[1327]: eth0: DHCP lease lost
Jan 30 13:04:38.824384 waagent[1843]: 2025-01-30T13:04:38.824293Z INFO Daemon Daemon Create user account if not exists
Jan 30 13:04:38.833436 waagent[1843]: 2025-01-30T13:04:38.825643Z INFO Daemon Daemon User core already exists, skip useradd
Jan 30 13:04:38.833436 waagent[1843]: 2025-01-30T13:04:38.826599Z INFO Daemon Daemon Configure sudoer
Jan 30 13:04:38.833436 waagent[1843]: 2025-01-30T13:04:38.827876Z INFO Daemon Daemon Configure sshd
Jan 30 13:04:38.833436 waagent[1843]: 2025-01-30T13:04:38.828981Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 30 13:04:38.833436 waagent[1843]: 2025-01-30T13:04:38.829794Z INFO Daemon Daemon Deploy ssh public key.
Jan 30 13:04:38.834242 systemd-networkd[1327]: eth0: DHCPv6 lease lost
Jan 30 13:04:38.877207 systemd-networkd[1327]: eth0: DHCPv4 address 10.200.4.12/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 30 13:04:39.989411 waagent[1843]: 2025-01-30T13:04:39.989334Z INFO Daemon Daemon Provisioning complete
Jan 30 13:04:40.002792 waagent[1843]: 2025-01-30T13:04:40.002735Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 30 13:04:40.005965 waagent[1843]: 2025-01-30T13:04:40.005898Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 30 13:04:40.010242 waagent[1843]: 2025-01-30T13:04:40.010187Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 30 13:04:40.133350 waagent[1926]: 2025-01-30T13:04:40.133241Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 30 13:04:40.133768 waagent[1926]: 2025-01-30T13:04:40.133408Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0
Jan 30 13:04:40.133768 waagent[1926]: 2025-01-30T13:04:40.133490Z INFO ExtHandler ExtHandler Python: 3.11.10
Jan 30 13:04:40.169160 waagent[1926]: 2025-01-30T13:04:40.169047Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 30 13:04:40.169417 waagent[1926]: 2025-01-30T13:04:40.169357Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 30 13:04:40.169536 waagent[1926]: 2025-01-30T13:04:40.169480Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 30 13:04:40.177540 waagent[1926]: 2025-01-30T13:04:40.177478Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 30 13:04:40.183036 waagent[1926]: 2025-01-30T13:04:40.182985Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.162
Jan 30 13:04:40.183513 waagent[1926]: 2025-01-30T13:04:40.183457Z INFO ExtHandler
Jan 30 13:04:40.183584 waagent[1926]: 2025-01-30T13:04:40.183549Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4b35bbe9-fcdf-4c39-bb1c-0f83b965b40c eTag: 6845681873160493461 source: Fabric]
Jan 30 13:04:40.183903 waagent[1926]: 2025-01-30T13:04:40.183852Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 30 13:04:40.184477 waagent[1926]: 2025-01-30T13:04:40.184420Z INFO ExtHandler
Jan 30 13:04:40.184543 waagent[1926]: 2025-01-30T13:04:40.184503Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 30 13:04:40.188066 waagent[1926]: 2025-01-30T13:04:40.188024Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 30 13:04:40.247214 waagent[1926]: 2025-01-30T13:04:40.247048Z INFO ExtHandler Downloaded certificate {'thumbprint': '221EA6614E0EB3A1EBAD92DDC57E01F292557030', 'hasPrivateKey': True}
Jan 30 13:04:40.247668 waagent[1926]: 2025-01-30T13:04:40.247609Z INFO ExtHandler Fetch goal state completed
Jan 30 13:04:40.261958 waagent[1926]: 2025-01-30T13:04:40.261888Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1926
Jan 30 13:04:40.262171 waagent[1926]: 2025-01-30T13:04:40.262083Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 30 13:04:40.264096 waagent[1926]: 2025-01-30T13:04:40.264021Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk']
Jan 30 13:04:40.264557 waagent[1926]: 2025-01-30T13:04:40.264496Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 30 13:04:40.298422 waagent[1926]: 2025-01-30T13:04:40.298364Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 30 13:04:40.298686 waagent[1926]: 2025-01-30T13:04:40.298625Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 30 13:04:40.305727 waagent[1926]: 2025-01-30T13:04:40.305536Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 30 13:04:40.312556 systemd[1]: Reloading requested from client PID 1939 ('systemctl') (unit waagent.service)...
Jan 30 13:04:40.312571 systemd[1]: Reloading...
Jan 30 13:04:40.418212 zram_generator::config[1973]: No configuration found.
Jan 30 13:04:40.534216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:04:40.621355 systemd[1]: Reloading finished in 308 ms.
Jan 30 13:04:40.649621 waagent[1926]: 2025-01-30T13:04:40.649161Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 30 13:04:40.664764 systemd[1]: Reloading requested from client PID 2030 ('systemctl') (unit waagent.service)...
Jan 30 13:04:40.664785 systemd[1]: Reloading...
Jan 30 13:04:40.772171 zram_generator::config[2067]: No configuration found.
Jan 30 13:04:40.884971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:04:40.972540 systemd[1]: Reloading finished in 307 ms.
Jan 30 13:04:41.000392 waagent[1926]: 2025-01-30T13:04:41.000296Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 30 13:04:41.000520 waagent[1926]: 2025-01-30T13:04:41.000484Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 30 13:04:42.010283 waagent[1926]: 2025-01-30T13:04:42.010180Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 30 13:04:42.011177 waagent[1926]: 2025-01-30T13:04:42.011081Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 30 13:04:42.013803 waagent[1926]: 2025-01-30T13:04:42.013723Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 30 13:04:42.014349 waagent[1926]: 2025-01-30T13:04:42.014263Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:04:42.014483 waagent[1926]: 2025-01-30T13:04:42.014420Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:04:42.014666 waagent[1926]: 2025-01-30T13:04:42.014611Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:04:42.015043 waagent[1926]: 2025-01-30T13:04:42.014981Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 13:04:42.015202 waagent[1926]: 2025-01-30T13:04:42.015109Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:04:42.015356 waagent[1926]: 2025-01-30T13:04:42.015299Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:04:42.015440 waagent[1926]: 2025-01-30T13:04:42.015386Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 13:04:42.015992 waagent[1926]: 2025-01-30T13:04:42.015928Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:04:42.016376 waagent[1926]: 2025-01-30T13:04:42.016300Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:04:42.016665 waagent[1926]: 2025-01-30T13:04:42.016600Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 13:04:42.016894 waagent[1926]: 2025-01-30T13:04:42.016834Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:04:42.017046 waagent[1926]: 2025-01-30T13:04:42.016965Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 30 13:04:42.017842 waagent[1926]: 2025-01-30T13:04:42.017787Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:04:42.017956 waagent[1926]: 2025-01-30T13:04:42.017847Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:04:42.017956 waagent[1926]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:04:42.017956 waagent[1926]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:04:42.017956 waagent[1926]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:04:42.017956 waagent[1926]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:04:42.017956 waagent[1926]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:04:42.017956 waagent[1926]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:04:42.018460 waagent[1926]: 2025-01-30T13:04:42.018289Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:04:42.025240 waagent[1926]: 2025-01-30T13:04:42.025173Z INFO ExtHandler ExtHandler Jan 30 13:04:42.025380 waagent[1926]: 2025-01-30T13:04:42.025342Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8a86424e-28a8-42f4-99cc-568c0be1cc87 correlation 5f9934b1-ed7d-4c29-9bf4-5508aede3a2c created: 2025-01-30T13:03:34.434761Z] Jan 30 13:04:42.025736 waagent[1926]: 2025-01-30T13:04:42.025689Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 30 13:04:42.026267 waagent[1926]: 2025-01-30T13:04:42.026224Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 30 13:04:42.070295 waagent[1926]: 2025-01-30T13:04:42.070079Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6BEEE494-0E5B-4D8D-8FA8-CAD70A0DA209;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 13:04:42.098095 waagent[1926]: 2025-01-30T13:04:42.098016Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 13:04:42.098095 waagent[1926]: Executing ['ip', '-a', '-o', 'link']: Jan 30 13:04:42.098095 waagent[1926]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 13:04:42.098095 waagent[1926]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:3e:f1:56 brd ff:ff:ff:ff:ff:ff Jan 30 13:04:42.098095 waagent[1926]: 3: enP55634s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:3e:f1:56 brd ff:ff:ff:ff:ff:ff\ altname enP55634p0s2 Jan 30 13:04:42.098095 waagent[1926]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 13:04:42.098095 waagent[1926]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 13:04:42.098095 waagent[1926]: 2: eth0 inet 10.200.4.12/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 13:04:42.098095 waagent[1926]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 13:04:42.098095 waagent[1926]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 13:04:42.098095 waagent[1926]: 2: eth0 inet6 fe80::222:48ff:fe3e:f156/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:04:42.098095 waagent[1926]: 3: enP55634s1 inet6 fe80::222:48ff:fe3e:f156/64 scope link proto 
kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:04:42.137116 waagent[1926]: 2025-01-30T13:04:42.137050Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 30 13:04:42.137116 waagent[1926]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:04:42.137116 waagent[1926]: pkts bytes target prot opt in out source destination Jan 30 13:04:42.137116 waagent[1926]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:04:42.137116 waagent[1926]: pkts bytes target prot opt in out source destination Jan 30 13:04:42.137116 waagent[1926]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:04:42.137116 waagent[1926]: pkts bytes target prot opt in out source destination Jan 30 13:04:42.137116 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:04:42.137116 waagent[1926]: 7 881 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:04:42.137116 waagent[1926]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:04:42.140936 waagent[1926]: 2025-01-30T13:04:42.140875Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 13:04:42.140936 waagent[1926]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:04:42.140936 waagent[1926]: pkts bytes target prot opt in out source destination Jan 30 13:04:42.140936 waagent[1926]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:04:42.140936 waagent[1926]: pkts bytes target prot opt in out source destination Jan 30 13:04:42.140936 waagent[1926]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:04:42.140936 waagent[1926]: pkts bytes target prot opt in out source destination Jan 30 13:04:42.140936 waagent[1926]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:04:42.140936 waagent[1926]: 8 933 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:04:42.140936 waagent[1926]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW 
Jan 30 13:04:42.141336 waagent[1926]: 2025-01-30T13:04:42.141193Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 13:04:47.644697 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:04:47.653341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:04:47.753688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:04:47.761475 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:04:48.375896 kubelet[2160]: E0130 13:04:48.375801 2160 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:04:48.379521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:04:48.379714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:04:58.394737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:04:58.400341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:04:58.490839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:04:58.495388 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:04:59.048009 kubelet[2176]: E0130 13:04:59.047954 2176 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:04:59.050913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:04:59.051121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:04:59.206921 chronyd[1716]: Selected source PHC0 Jan 30 13:05:09.144652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:05:09.150342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:09.239328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:09.243510 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:05:09.789618 kubelet[2191]: E0130 13:05:09.789561 2191 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:05:09.792105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:05:09.792313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:05:12.213633 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 13:05:12.218440 systemd[1]: Started sshd@0-10.200.4.12:22-10.200.16.10:39680.service - OpenSSH per-connection server daemon (10.200.16.10:39680). Jan 30 13:05:13.029219 sshd[2199]: Accepted publickey for core from 10.200.16.10 port 39680 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:13.030908 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:13.036781 systemd-logind[1681]: New session 3 of user core. Jan 30 13:05:13.046292 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:05:13.610428 systemd[1]: Started sshd@1-10.200.4.12:22-10.200.16.10:39694.service - OpenSSH per-connection server daemon (10.200.16.10:39694). Jan 30 13:05:14.254774 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 39694 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:14.257383 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:14.261343 systemd-logind[1681]: New session 4 of user core. Jan 30 13:05:14.265431 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:05:14.710336 sshd[2206]: Connection closed by 10.200.16.10 port 39694 Jan 30 13:05:14.711286 sshd-session[2204]: pam_unix(sshd:session): session closed for user core Jan 30 13:05:14.715620 systemd[1]: sshd@1-10.200.4.12:22-10.200.16.10:39694.service: Deactivated successfully. Jan 30 13:05:14.717705 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:05:14.718559 systemd-logind[1681]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:05:14.719799 systemd-logind[1681]: Removed session 4. Jan 30 13:05:14.824265 systemd[1]: Started sshd@2-10.200.4.12:22-10.200.16.10:39702.service - OpenSSH per-connection server daemon (10.200.16.10:39702). 
Jan 30 13:05:15.475388 sshd[2211]: Accepted publickey for core from 10.200.16.10 port 39702 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:15.477064 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:15.482607 systemd-logind[1681]: New session 5 of user core. Jan 30 13:05:15.489522 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:05:15.925331 sshd[2213]: Connection closed by 10.200.16.10 port 39702 Jan 30 13:05:15.926183 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Jan 30 13:05:15.929549 systemd[1]: sshd@2-10.200.4.12:22-10.200.16.10:39702.service: Deactivated successfully. Jan 30 13:05:15.931722 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:05:15.933280 systemd-logind[1681]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:05:15.934234 systemd-logind[1681]: Removed session 5. Jan 30 13:05:16.042294 systemd[1]: Started sshd@3-10.200.4.12:22-10.200.16.10:47894.service - OpenSSH per-connection server daemon (10.200.16.10:47894). Jan 30 13:05:16.686845 sshd[2218]: Accepted publickey for core from 10.200.16.10 port 47894 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:16.688526 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:16.694095 systemd-logind[1681]: New session 6 of user core. Jan 30 13:05:16.703272 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:05:17.140806 sshd[2220]: Connection closed by 10.200.16.10 port 47894 Jan 30 13:05:17.141922 sshd-session[2218]: pam_unix(sshd:session): session closed for user core Jan 30 13:05:17.145198 systemd[1]: sshd@3-10.200.4.12:22-10.200.16.10:47894.service: Deactivated successfully. Jan 30 13:05:17.147358 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:05:17.148760 systemd-logind[1681]: Session 6 logged out. 
Waiting for processes to exit. Jan 30 13:05:17.149695 systemd-logind[1681]: Removed session 6. Jan 30 13:05:17.256876 systemd[1]: Started sshd@4-10.200.4.12:22-10.200.16.10:47904.service - OpenSSH per-connection server daemon (10.200.16.10:47904). Jan 30 13:05:17.898469 sshd[2225]: Accepted publickey for core from 10.200.16.10 port 47904 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:17.900088 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:17.905603 systemd-logind[1681]: New session 7 of user core. Jan 30 13:05:17.913285 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:05:18.382750 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:05:18.383528 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:05:18.412491 sudo[2228]: pam_unix(sudo:session): session closed for user root Jan 30 13:05:18.515358 sshd[2227]: Connection closed by 10.200.16.10 port 47904 Jan 30 13:05:18.516489 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Jan 30 13:05:18.519730 systemd[1]: sshd@4-10.200.4.12:22-10.200.16.10:47904.service: Deactivated successfully. Jan 30 13:05:18.521711 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:05:18.523274 systemd-logind[1681]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:05:18.524383 systemd-logind[1681]: Removed session 7. Jan 30 13:05:18.628432 systemd[1]: Started sshd@5-10.200.4.12:22-10.200.16.10:47916.service - OpenSSH per-connection server daemon (10.200.16.10:47916). Jan 30 13:05:19.098765 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jan 30 13:05:19.269691 sshd[2233]: Accepted publickey for core from 10.200.16.10 port 47916 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:19.271878 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:19.276414 systemd-logind[1681]: New session 8 of user core. Jan 30 13:05:19.286299 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:05:19.620499 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:05:19.620852 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:05:19.624308 sudo[2237]: pam_unix(sudo:session): session closed for user root Jan 30 13:05:19.629432 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:05:19.629783 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:05:19.642495 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:05:19.669409 augenrules[2259]: No rules Jan 30 13:05:19.670797 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:05:19.671018 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:05:19.672613 sudo[2236]: pam_unix(sudo:session): session closed for user root Jan 30 13:05:19.777905 sshd[2235]: Connection closed by 10.200.16.10 port 47916 Jan 30 13:05:19.778730 sshd-session[2233]: pam_unix(sshd:session): session closed for user core Jan 30 13:05:19.782053 systemd[1]: sshd@5-10.200.4.12:22-10.200.16.10:47916.service: Deactivated successfully. Jan 30 13:05:19.784031 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:05:19.785602 systemd-logind[1681]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:05:19.786535 systemd-logind[1681]: Removed session 8. 
Jan 30 13:05:19.890517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:05:19.900319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:19.902670 systemd[1]: Started sshd@6-10.200.4.12:22-10.200.16.10:47926.service - OpenSSH per-connection server daemon (10.200.16.10:47926). Jan 30 13:05:20.002757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:20.016487 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:05:20.055147 kubelet[2277]: E0130 13:05:20.055081 2277 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:05:20.057490 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:05:20.057694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:05:20.549878 sshd[2268]: Accepted publickey for core from 10.200.16.10 port 47926 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:05:20.551352 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:05:20.556272 systemd-logind[1681]: New session 9 of user core. Jan 30 13:05:20.562272 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:05:20.904001 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:05:20.904374 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:05:21.127579 update_engine[1683]: I20250130 13:05:21.127480 1683 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:05:21.181180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2306) Jan 30 13:05:21.318340 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2307) Jan 30 13:05:22.905432 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:05:22.906291 (dockerd)[2418]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:05:24.287959 dockerd[2418]: time="2025-01-30T13:05:24.287891995Z" level=info msg="Starting up" Jan 30 13:05:24.814075 dockerd[2418]: time="2025-01-30T13:05:24.814026047Z" level=info msg="Loading containers: start." Jan 30 13:05:25.014295 kernel: Initializing XFRM netlink socket Jan 30 13:05:25.218092 systemd-networkd[1327]: docker0: Link UP Jan 30 13:05:25.257381 dockerd[2418]: time="2025-01-30T13:05:25.257342909Z" level=info msg="Loading containers: done." Jan 30 13:05:25.318969 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck213934454-merged.mount: Deactivated successfully. Jan 30 13:05:25.326028 dockerd[2418]: time="2025-01-30T13:05:25.325972246Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:05:25.326408 dockerd[2418]: time="2025-01-30T13:05:25.326100347Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:05:25.326408 dockerd[2418]: time="2025-01-30T13:05:25.326250849Z" level=info msg="Daemon has completed initialization" Jan 30 13:05:25.381001 dockerd[2418]: time="2025-01-30T13:05:25.380939636Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:05:25.381418 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 30 13:05:27.390994 containerd[1702]: time="2025-01-30T13:05:27.390578423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:05:28.225517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381573624.mount: Deactivated successfully. Jan 30 13:05:29.996281 containerd[1702]: time="2025-01-30T13:05:29.996224705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:29.998354 containerd[1702]: time="2025-01-30T13:05:29.998295134Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 30 13:05:30.001827 containerd[1702]: time="2025-01-30T13:05:30.001773783Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:30.007161 containerd[1702]: time="2025-01-30T13:05:30.007093257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:30.008408 containerd[1702]: time="2025-01-30T13:05:30.008049670Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.617412047s" Jan 30 13:05:30.008408 containerd[1702]: time="2025-01-30T13:05:30.008092871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:05:30.032363 containerd[1702]: 
time="2025-01-30T13:05:30.032319610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:05:30.144489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:05:30.151623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:30.694225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:30.698661 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:05:30.736736 kubelet[2671]: E0130 13:05:30.736681 2671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:05:30.739111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:05:30.739347 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:05:32.622115 containerd[1702]: time="2025-01-30T13:05:32.622058247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:32.625311 containerd[1702]: time="2025-01-30T13:05:32.625240987Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 30 13:05:32.627961 containerd[1702]: time="2025-01-30T13:05:32.627904521Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:32.633510 containerd[1702]: time="2025-01-30T13:05:32.633467191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:32.634659 containerd[1702]: time="2025-01-30T13:05:32.634512904Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.602145693s" Jan 30 13:05:32.634659 containerd[1702]: time="2025-01-30T13:05:32.634552405Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:05:32.658070 containerd[1702]: time="2025-01-30T13:05:32.658027200Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:05:34.021495 containerd[1702]: time="2025-01-30T13:05:34.021441976Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:34.024287 containerd[1702]: time="2025-01-30T13:05:34.024222911Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 30 13:05:34.029973 containerd[1702]: time="2025-01-30T13:05:34.029908082Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:34.034841 containerd[1702]: time="2025-01-30T13:05:34.034778744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:34.036077 containerd[1702]: time="2025-01-30T13:05:34.035744856Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.377674155s" Jan 30 13:05:34.036077 containerd[1702]: time="2025-01-30T13:05:34.035783256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:05:34.058951 containerd[1702]: time="2025-01-30T13:05:34.058908947Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:05:35.400758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439151797.mount: Deactivated successfully. 
Jan 30 13:05:35.873647 containerd[1702]: time="2025-01-30T13:05:35.873505506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:35.875835 containerd[1702]: time="2025-01-30T13:05:35.875786935Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 13:05:35.879154 containerd[1702]: time="2025-01-30T13:05:35.879097177Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:35.920962 containerd[1702]: time="2025-01-30T13:05:35.920871203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:35.922036 containerd[1702]: time="2025-01-30T13:05:35.921846715Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.862890167s" Jan 30 13:05:35.922036 containerd[1702]: time="2025-01-30T13:05:35.921897616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:05:35.944793 containerd[1702]: time="2025-01-30T13:05:35.944756304Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:05:36.548989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795736191.mount: Deactivated successfully. 
Jan 30 13:05:37.805136 containerd[1702]: time="2025-01-30T13:05:37.805060738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:37.808282 containerd[1702]: time="2025-01-30T13:05:37.808221378Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 13:05:37.812683 containerd[1702]: time="2025-01-30T13:05:37.812624033Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:37.818382 containerd[1702]: time="2025-01-30T13:05:37.818339905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:37.819772 containerd[1702]: time="2025-01-30T13:05:37.819296517Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.874500513s" Jan 30 13:05:37.819772 containerd[1702]: time="2025-01-30T13:05:37.819337518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:05:37.840277 containerd[1702]: time="2025-01-30T13:05:37.840236981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:05:38.482672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558205110.mount: Deactivated successfully. 
Jan 30 13:05:38.507063 containerd[1702]: time="2025-01-30T13:05:38.507007281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:38.509181 containerd[1702]: time="2025-01-30T13:05:38.509110507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 30 13:05:38.514549 containerd[1702]: time="2025-01-30T13:05:38.514498775Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:38.520257 containerd[1702]: time="2025-01-30T13:05:38.520199347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:38.520990 containerd[1702]: time="2025-01-30T13:05:38.520956556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 680.679874ms" Jan 30 13:05:38.521083 containerd[1702]: time="2025-01-30T13:05:38.520994057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:05:38.543716 containerd[1702]: time="2025-01-30T13:05:38.543665142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:05:39.184953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944824797.mount: Deactivated successfully. Jan 30 13:05:40.896031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 30 13:05:40.904235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:41.042335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:41.048059 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:05:41.593543 kubelet[2817]: E0130 13:05:41.593484 2817 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:05:41.595942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:05:41.596165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:05:41.894008 containerd[1702]: time="2025-01-30T13:05:41.893775840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:41.896170 containerd[1702]: time="2025-01-30T13:05:41.896088372Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 30 13:05:41.900196 containerd[1702]: time="2025-01-30T13:05:41.900109627Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:41.904251 containerd[1702]: time="2025-01-30T13:05:41.904200583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:05:41.906061 containerd[1702]: time="2025-01-30T13:05:41.905768305Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.362061162s" Jan 30 13:05:41.906061 containerd[1702]: time="2025-01-30T13:05:41.905803605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:05:44.606625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:44.612424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:44.644529 systemd[1]: Reloading requested from client PID 2889 ('systemctl') (unit session-9.scope)... Jan 30 13:05:44.644548 systemd[1]: Reloading... Jan 30 13:05:44.775159 zram_generator::config[2929]: No configuration found. Jan 30 13:05:44.898015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:05:44.984285 systemd[1]: Reloading finished in 339 ms. Jan 30 13:05:45.039154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:05:45.039368 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:05:45.039666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:45.042097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:45.303920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:05:45.313458 (kubelet)[3000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:05:45.351806 kubelet[3000]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:05:45.351806 kubelet[3000]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:05:45.351806 kubelet[3000]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:05:45.352267 kubelet[3000]: I0130 13:05:45.351848 3000 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:05:45.646886 kubelet[3000]: I0130 13:05:45.646851 3000 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:05:45.646886 kubelet[3000]: I0130 13:05:45.646876 3000 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:05:45.647144 kubelet[3000]: I0130 13:05:45.647110 3000 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:05:45.984177 kubelet[3000]: I0130 13:05:45.983594 3000 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:05:45.984177 kubelet[3000]: E0130 13:05:45.983692 3000 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.200.4.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:45.993224 kubelet[3000]: I0130 13:05:45.993188 3000 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:05:46.014309 kubelet[3000]: I0130 13:05:46.014230 3000 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:05:46.014535 kubelet[3000]: I0130 13:05:46.014305 3000 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-d95fc4b65f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":n
ull,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:05:46.014923 kubelet[3000]: I0130 13:05:46.014899 3000 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:05:46.014987 kubelet[3000]: I0130 13:05:46.014928 3000 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:05:46.015113 kubelet[3000]: I0130 13:05:46.015094 3000 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:05:46.015878 kubelet[3000]: I0130 13:05:46.015856 3000 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:05:46.015878 kubelet[3000]: I0130 13:05:46.015878 3000 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:05:46.015987 kubelet[3000]: I0130 13:05:46.015907 3000 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:05:46.015987 kubelet[3000]: I0130 13:05:46.015926 3000 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:05:46.021551 kubelet[3000]: W0130 13:05:46.021043 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.021551 kubelet[3000]: E0130 13:05:46.021148 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.021551 kubelet[3000]: W0130 13:05:46.021444 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-d95fc4b65f&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.021551 
kubelet[3000]: E0130 13:05:46.021488 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-d95fc4b65f&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.022330 kubelet[3000]: I0130 13:05:46.022101 3000 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:05:46.024779 kubelet[3000]: I0130 13:05:46.023698 3000 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:05:46.024779 kubelet[3000]: W0130 13:05:46.023762 3000 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:05:46.024779 kubelet[3000]: I0130 13:05:46.024641 3000 server.go:1264] "Started kubelet" Jan 30 13:05:46.026571 kubelet[3000]: I0130 13:05:46.026343 3000 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:05:46.027495 kubelet[3000]: I0130 13:05:46.027470 3000 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:05:46.063174 kubelet[3000]: I0130 13:05:46.062535 3000 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:05:46.063174 kubelet[3000]: I0130 13:05:46.062982 3000 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:05:46.064653 kubelet[3000]: I0130 13:05:46.064625 3000 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:05:46.065272 kubelet[3000]: E0130 13:05:46.065100 3000 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.12:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4186.1.0-a-d95fc4b65f.181f7a344056bce5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-d95fc4b65f,UID:ci-4186.1.0-a-d95fc4b65f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-d95fc4b65f,},FirstTimestamp:2025-01-30 13:05:46.024615141 +0000 UTC m=+0.707945353,LastTimestamp:2025-01-30 13:05:46.024615141 +0000 UTC m=+0.707945353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-d95fc4b65f,}" Jan 30 13:05:46.072608 kubelet[3000]: E0130 13:05:46.072562 3000 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-d95fc4b65f\" not found" Jan 30 13:05:46.072702 kubelet[3000]: I0130 13:05:46.072631 3000 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:05:46.073187 kubelet[3000]: I0130 13:05:46.072776 3000 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:05:46.073187 kubelet[3000]: I0130 13:05:46.072834 3000 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:05:46.074903 kubelet[3000]: W0130 13:05:46.074848 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.074979 kubelet[3000]: E0130 13:05:46.074916 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.075032 kubelet[3000]: E0130 13:05:46.074994 3000 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-d95fc4b65f?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="200ms" Jan 30 13:05:46.076208 kubelet[3000]: I0130 13:05:46.076174 3000 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:05:46.077239 kubelet[3000]: I0130 13:05:46.076291 3000 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:05:46.081314 kubelet[3000]: E0130 13:05:46.081294 3000 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:05:46.081956 kubelet[3000]: I0130 13:05:46.081934 3000 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:05:46.101702 kubelet[3000]: I0130 13:05:46.101678 3000 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:05:46.101702 kubelet[3000]: I0130 13:05:46.101695 3000 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:05:46.101834 kubelet[3000]: I0130 13:05:46.101735 3000 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:05:46.108790 kubelet[3000]: I0130 13:05:46.108760 3000 policy_none.go:49] "None policy: Start" Jan 30 13:05:46.109488 kubelet[3000]: I0130 13:05:46.109467 3000 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:05:46.109578 kubelet[3000]: I0130 13:05:46.109492 3000 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:05:46.117922 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:05:46.130880 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 30 13:05:46.136254 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:05:46.139488 kubelet[3000]: I0130 13:05:46.139439 3000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:05:46.141199 kubelet[3000]: I0130 13:05:46.140831 3000 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:05:46.141199 kubelet[3000]: I0130 13:05:46.140881 3000 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:05:46.141199 kubelet[3000]: I0130 13:05:46.140905 3000 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:05:46.141590 kubelet[3000]: E0130 13:05:46.141556 3000 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:05:46.143287 kubelet[3000]: W0130 13:05:46.143115 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.143368 kubelet[3000]: E0130 13:05:46.143298 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:46.144097 kubelet[3000]: I0130 13:05:46.144070 3000 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:05:46.145352 kubelet[3000]: I0130 13:05:46.145304 3000 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:05:46.145647 kubelet[3000]: I0130 13:05:46.145628 3000 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jan 30 13:05:46.148035 kubelet[3000]: E0130 13:05:46.147980 3000 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-d95fc4b65f\" not found" Jan 30 13:05:46.175287 kubelet[3000]: I0130 13:05:46.175154 3000 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.175604 kubelet[3000]: E0130 13:05:46.175572 3000 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.242079 kubelet[3000]: I0130 13:05:46.241873 3000 topology_manager.go:215] "Topology Admit Handler" podUID="b2dfddcb7d751c221fbe05443a478b63" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.244445 kubelet[3000]: I0130 13:05:46.244405 3000 topology_manager.go:215] "Topology Admit Handler" podUID="b1eb0583051a855d71f4cdc9c61747b9" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.247072 kubelet[3000]: I0130 13:05:46.246954 3000 topology_manager.go:215] "Topology Admit Handler" podUID="8396f59cea589c89970f59deaf313d4b" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.254764 systemd[1]: Created slice kubepods-burstable-podb2dfddcb7d751c221fbe05443a478b63.slice - libcontainer container kubepods-burstable-podb2dfddcb7d751c221fbe05443a478b63.slice. 
Jan 30 13:05:46.276454 kubelet[3000]: E0130 13:05:46.276385 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-d95fc4b65f?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="400ms" Jan 30 13:05:46.279084 systemd[1]: Created slice kubepods-burstable-podb1eb0583051a855d71f4cdc9c61747b9.slice - libcontainer container kubepods-burstable-podb1eb0583051a855d71f4cdc9c61747b9.slice. Jan 30 13:05:46.290809 systemd[1]: Created slice kubepods-burstable-pod8396f59cea589c89970f59deaf313d4b.slice - libcontainer container kubepods-burstable-pod8396f59cea589c89970f59deaf313d4b.slice. Jan 30 13:05:46.374008 kubelet[3000]: I0130 13:05:46.373890 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8396f59cea589c89970f59deaf313d4b-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-d95fc4b65f\" (UID: \"8396f59cea589c89970f59deaf313d4b\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374008 kubelet[3000]: I0130 13:05:46.373961 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2dfddcb7d751c221fbe05443a478b63-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b2dfddcb7d751c221fbe05443a478b63\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374008 kubelet[3000]: I0130 13:05:46.374018 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 
13:05:46.374745 kubelet[3000]: I0130 13:05:46.374069 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374745 kubelet[3000]: I0130 13:05:46.374096 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374745 kubelet[3000]: I0130 13:05:46.374121 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2dfddcb7d751c221fbe05443a478b63-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b2dfddcb7d751c221fbe05443a478b63\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374745 kubelet[3000]: I0130 13:05:46.374182 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2dfddcb7d751c221fbe05443a478b63-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b2dfddcb7d751c221fbe05443a478b63\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374745 kubelet[3000]: I0130 13:05:46.374212 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-k8s-certs\") pod 
\"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.374894 kubelet[3000]: I0130 13:05:46.374242 3000 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.378196 kubelet[3000]: I0130 13:05:46.378157 3000 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.378569 kubelet[3000]: E0130 13:05:46.378535 3000 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.577640 containerd[1702]: time="2025-01-30T13:05:46.577514557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-d95fc4b65f,Uid:b2dfddcb7d751c221fbe05443a478b63,Namespace:kube-system,Attempt:0,}" Jan 30 13:05:46.589139 containerd[1702]: time="2025-01-30T13:05:46.589088216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-d95fc4b65f,Uid:b1eb0583051a855d71f4cdc9c61747b9,Namespace:kube-system,Attempt:0,}" Jan 30 13:05:46.593672 containerd[1702]: time="2025-01-30T13:05:46.593641279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-d95fc4b65f,Uid:8396f59cea589c89970f59deaf313d4b,Namespace:kube-system,Attempt:0,}" Jan 30 13:05:46.677614 kubelet[3000]: E0130 13:05:46.677548 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-d95fc4b65f?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="800ms" Jan 30 13:05:46.780818 kubelet[3000]: I0130 13:05:46.780773 3000 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:46.781139 kubelet[3000]: E0130 13:05:46.781096 3000 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:47.119843 kubelet[3000]: W0130 13:05:47.119773 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-d95fc4b65f&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.119843 kubelet[3000]: E0130 13:05:47.119848 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-d95fc4b65f&limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.152772 kubelet[3000]: W0130 13:05:47.152696 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.152772 kubelet[3000]: E0130 13:05:47.152774 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.256237 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1972952863.mount: Deactivated successfully. Jan 30 13:05:47.282841 containerd[1702]: time="2025-01-30T13:05:47.282771077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:05:47.297869 containerd[1702]: time="2025-01-30T13:05:47.297803596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 13:05:47.301965 containerd[1702]: time="2025-01-30T13:05:47.301926057Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:05:47.313540 containerd[1702]: time="2025-01-30T13:05:47.313491125Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:05:47.319380 containerd[1702]: time="2025-01-30T13:05:47.319166808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:05:47.320042 kubelet[3000]: W0130 13:05:47.319934 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.320042 kubelet[3000]: E0130 13:05:47.320018 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.322406 containerd[1702]: 
time="2025-01-30T13:05:47.322371855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:05:47.326466 containerd[1702]: time="2025-01-30T13:05:47.326428014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:05:47.327226 containerd[1702]: time="2025-01-30T13:05:47.327195925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 749.559067ms" Jan 30 13:05:47.328867 containerd[1702]: time="2025-01-30T13:05:47.328813649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:05:47.337792 containerd[1702]: time="2025-01-30T13:05:47.337757480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 748.568962ms" Jan 30 13:05:47.350660 containerd[1702]: time="2025-01-30T13:05:47.350631168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 756.916387ms" Jan 30 13:05:47.478347 
kubelet[3000]: E0130 13:05:47.478282 3000 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-d95fc4b65f?timeout=10s\": dial tcp 10.200.4.12:6443: connect: connection refused" interval="1.6s" Jan 30 13:05:47.506063 kubelet[3000]: W0130 13:05:47.506025 3000 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.506063 kubelet[3000]: E0130 13:05:47.506068 3000 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:47.583528 kubelet[3000]: I0130 13:05:47.583488 3000 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:47.583905 kubelet[3000]: E0130 13:05:47.583868 3000 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.4.12:6443/api/v1/nodes\": dial tcp 10.200.4.12:6443: connect: connection refused" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:47.999867 kubelet[3000]: E0130 13:05:47.999829 3000 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.12:6443: connect: connection refused Jan 30 13:05:48.034258 kubelet[3000]: E0130 13:05:48.033976 3000 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.12:6443/api/v1/namespaces/default/events\": dial 
tcp 10.200.4.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-d95fc4b65f.181f7a344056bce5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-d95fc4b65f,UID:ci-4186.1.0-a-d95fc4b65f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-d95fc4b65f,},FirstTimestamp:2025-01-30 13:05:46.024615141 +0000 UTC m=+0.707945353,LastTimestamp:2025-01-30 13:05:46.024615141 +0000 UTC m=+0.707945353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-d95fc4b65f,}" Jan 30 13:05:48.077719 containerd[1702]: time="2025-01-30T13:05:48.077617780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:05:48.080241 containerd[1702]: time="2025-01-30T13:05:48.079821312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:05:48.080241 containerd[1702]: time="2025-01-30T13:05:48.079881513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:05:48.080241 containerd[1702]: time="2025-01-30T13:05:48.079898413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:05:48.080241 containerd[1702]: time="2025-01-30T13:05:48.079993914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:05:48.081497 containerd[1702]: time="2025-01-30T13:05:48.081267233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:05:48.081497 containerd[1702]: time="2025-01-30T13:05:48.081293833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:05:48.083046 containerd[1702]: time="2025-01-30T13:05:48.082228247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:05:48.085025 containerd[1702]: time="2025-01-30T13:05:48.084942987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:05:48.085228 containerd[1702]: time="2025-01-30T13:05:48.085183990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:05:48.085393 containerd[1702]: time="2025-01-30T13:05:48.085357693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:05:48.086466 containerd[1702]: time="2025-01-30T13:05:48.086371207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:05:48.115428 systemd[1]: Started cri-containerd-d0efea7569650eea152c3c654a3e4fa13fdc8c7d851c3f818aea86e63ac025f2.scope - libcontainer container d0efea7569650eea152c3c654a3e4fa13fdc8c7d851c3f818aea86e63ac025f2. Jan 30 13:05:48.122845 systemd[1]: Started cri-containerd-67d44c2995059eff82ddae7ac2cdf396b8da2c755f302e2d737c165fc0143275.scope - libcontainer container 67d44c2995059eff82ddae7ac2cdf396b8da2c755f302e2d737c165fc0143275. Jan 30 13:05:48.130349 systemd[1]: Started cri-containerd-683093ab54d958114de7b388b78ac9f75d426b837f17fb6c4e3a6708ee2b9d00.scope - libcontainer container 683093ab54d958114de7b388b78ac9f75d426b837f17fb6c4e3a6708ee2b9d00. 
Jan 30 13:05:48.196474 containerd[1702]: time="2025-01-30T13:05:48.196435514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-d95fc4b65f,Uid:b2dfddcb7d751c221fbe05443a478b63,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0efea7569650eea152c3c654a3e4fa13fdc8c7d851c3f818aea86e63ac025f2\"" Jan 30 13:05:48.208585 containerd[1702]: time="2025-01-30T13:05:48.207882781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-d95fc4b65f,Uid:8396f59cea589c89970f59deaf313d4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"67d44c2995059eff82ddae7ac2cdf396b8da2c755f302e2d737c165fc0143275\"" Jan 30 13:05:48.211934 containerd[1702]: time="2025-01-30T13:05:48.211830639Z" level=info msg="CreateContainer within sandbox \"d0efea7569650eea152c3c654a3e4fa13fdc8c7d851c3f818aea86e63ac025f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:05:48.216911 containerd[1702]: time="2025-01-30T13:05:48.216831612Z" level=info msg="CreateContainer within sandbox \"67d44c2995059eff82ddae7ac2cdf396b8da2c755f302e2d737c165fc0143275\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:05:48.250277 containerd[1702]: time="2025-01-30T13:05:48.248348272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-d95fc4b65f,Uid:b1eb0583051a855d71f4cdc9c61747b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"683093ab54d958114de7b388b78ac9f75d426b837f17fb6c4e3a6708ee2b9d00\"" Jan 30 13:05:48.256081 containerd[1702]: time="2025-01-30T13:05:48.256042584Z" level=info msg="CreateContainer within sandbox \"683093ab54d958114de7b388b78ac9f75d426b837f17fb6c4e3a6708ee2b9d00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:05:48.256358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount846679502.mount: Deactivated successfully. 
Jan 30 13:05:48.287017 containerd[1702]: time="2025-01-30T13:05:48.286751032Z" level=info msg="CreateContainer within sandbox \"d0efea7569650eea152c3c654a3e4fa13fdc8c7d851c3f818aea86e63ac025f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d85bdcf676e2d11760fbc906fb962ee135347ee6b1f7d41162d72184b4102c3\"" Jan 30 13:05:48.287002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405216359.mount: Deactivated successfully. Jan 30 13:05:48.288243 containerd[1702]: time="2025-01-30T13:05:48.287861249Z" level=info msg="StartContainer for \"6d85bdcf676e2d11760fbc906fb962ee135347ee6b1f7d41162d72184b4102c3\"" Jan 30 13:05:48.309349 containerd[1702]: time="2025-01-30T13:05:48.309228861Z" level=info msg="CreateContainer within sandbox \"67d44c2995059eff82ddae7ac2cdf396b8da2c755f302e2d737c165fc0143275\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"475c1455dc969e1de03e9097e20dd4e0525f454748a3d6d854c16036bac131fc\"" Jan 30 13:05:48.311397 containerd[1702]: time="2025-01-30T13:05:48.310418278Z" level=info msg="StartContainer for \"475c1455dc969e1de03e9097e20dd4e0525f454748a3d6d854c16036bac131fc\"" Jan 30 13:05:48.318877 containerd[1702]: time="2025-01-30T13:05:48.318834501Z" level=info msg="CreateContainer within sandbox \"683093ab54d958114de7b388b78ac9f75d426b837f17fb6c4e3a6708ee2b9d00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"73cf5e03e68c72b6cea521c953a1911027208c14c29502e92fb6e3a42f8743c0\"" Jan 30 13:05:48.319621 containerd[1702]: time="2025-01-30T13:05:48.319384109Z" level=info msg="StartContainer for \"73cf5e03e68c72b6cea521c953a1911027208c14c29502e92fb6e3a42f8743c0\"" Jan 30 13:05:48.320439 systemd[1]: Started cri-containerd-6d85bdcf676e2d11760fbc906fb962ee135347ee6b1f7d41162d72184b4102c3.scope - libcontainer container 6d85bdcf676e2d11760fbc906fb962ee135347ee6b1f7d41162d72184b4102c3. 
Jan 30 13:05:48.367406 systemd[1]: Started cri-containerd-475c1455dc969e1de03e9097e20dd4e0525f454748a3d6d854c16036bac131fc.scope - libcontainer container 475c1455dc969e1de03e9097e20dd4e0525f454748a3d6d854c16036bac131fc. Jan 30 13:05:48.369645 systemd[1]: Started cri-containerd-73cf5e03e68c72b6cea521c953a1911027208c14c29502e92fb6e3a42f8743c0.scope - libcontainer container 73cf5e03e68c72b6cea521c953a1911027208c14c29502e92fb6e3a42f8743c0. Jan 30 13:05:48.405235 containerd[1702]: time="2025-01-30T13:05:48.404971558Z" level=info msg="StartContainer for \"6d85bdcf676e2d11760fbc906fb962ee135347ee6b1f7d41162d72184b4102c3\" returns successfully" Jan 30 13:05:48.467278 containerd[1702]: time="2025-01-30T13:05:48.466932763Z" level=info msg="StartContainer for \"73cf5e03e68c72b6cea521c953a1911027208c14c29502e92fb6e3a42f8743c0\" returns successfully" Jan 30 13:05:48.487790 containerd[1702]: time="2025-01-30T13:05:48.487707366Z" level=info msg="StartContainer for \"475c1455dc969e1de03e9097e20dd4e0525f454748a3d6d854c16036bac131fc\" returns successfully" Jan 30 13:05:49.186173 kubelet[3000]: I0130 13:05:49.185957 3000 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:50.565877 kubelet[3000]: E0130 13:05:50.565833 3000 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-a-d95fc4b65f\" not found" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:51.431029 kubelet[3000]: I0130 13:05:51.429799 3000 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:52.426603 kubelet[3000]: I0130 13:05:52.426551 3000 apiserver.go:52] "Watching apiserver" Jan 30 13:05:52.472943 kubelet[3000]: I0130 13:05:52.472870 3000 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:05:53.341320 kubelet[3000]: W0130 13:05:53.339980 3000 warnings.go:70] metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:05:53.558064 systemd[1]: Reloading requested from client PID 3271 ('systemctl') (unit session-9.scope)... Jan 30 13:05:53.558080 systemd[1]: Reloading... Jan 30 13:05:53.646208 kubelet[3000]: W0130 13:05:53.644081 3000 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:05:53.712348 zram_generator::config[3311]: No configuration found. Jan 30 13:05:53.852913 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:05:53.954534 systemd[1]: Reloading finished in 395 ms. Jan 30 13:05:53.995002 kubelet[3000]: E0130 13:05:53.993921 3000 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4186.1.0-a-d95fc4b65f.181f7a344056bce5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-d95fc4b65f,UID:ci-4186.1.0-a-d95fc4b65f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-d95fc4b65f,},FirstTimestamp:2025-01-30 13:05:46.024615141 +0000 UTC m=+0.707945353,LastTimestamp:2025-01-30 13:05:46.024615141 +0000 UTC m=+0.707945353,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-d95fc4b65f,}" Jan 30 13:05:53.994242 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:05:53.995363 kubelet[3000]: I0130 13:05:53.995118 3000 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:05:53.998175 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:05:53.998405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:54.003453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:05:54.346816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:05:54.356472 (kubelet)[3378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:05:54.396246 kubelet[3378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:05:54.396246 kubelet[3378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:05:54.396246 kubelet[3378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:05:54.396730 kubelet[3378]: I0130 13:05:54.396303 3378 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:05:54.400615 kubelet[3378]: I0130 13:05:54.400585 3378 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:05:54.400615 kubelet[3378]: I0130 13:05:54.400607 3378 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:05:54.400818 kubelet[3378]: I0130 13:05:54.400800 3378 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:05:54.402081 kubelet[3378]: I0130 13:05:54.402055 3378 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:05:54.403650 kubelet[3378]: I0130 13:05:54.403351 3378 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:05:54.412472 kubelet[3378]: I0130 13:05:54.412450 3378 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:05:54.412711 kubelet[3378]: I0130 13:05:54.412684 3378 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:05:54.412912 kubelet[3378]: I0130 13:05:54.412711 3378 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-d95fc4b65f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:05:54.413066 kubelet[3378]: I0130 13:05:54.412927 3378 topology_manager.go:138] "Creating topology manager with none policy" Jan 
30 13:05:54.413066 kubelet[3378]: I0130 13:05:54.412942 3378 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:05:54.413066 kubelet[3378]: I0130 13:05:54.412989 3378 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:05:54.413386 kubelet[3378]: I0130 13:05:54.413093 3378 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:05:54.413386 kubelet[3378]: I0130 13:05:54.413107 3378 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:05:54.413386 kubelet[3378]: I0130 13:05:54.413144 3378 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:05:54.413386 kubelet[3378]: I0130 13:05:54.413163 3378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:05:54.414285 kubelet[3378]: I0130 13:05:54.414266 3378 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:05:54.414496 kubelet[3378]: I0130 13:05:54.414479 3378 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:05:54.415038 kubelet[3378]: I0130 13:05:54.414943 3378 server.go:1264] "Started kubelet" Jan 30 13:05:54.420085 kubelet[3378]: I0130 13:05:54.420054 3378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:05:54.435587 kubelet[3378]: I0130 13:05:54.435211 3378 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:05:54.437030 kubelet[3378]: I0130 13:05:54.436997 3378 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:05:54.440145 kubelet[3378]: I0130 13:05:54.438027 3378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:05:54.440145 kubelet[3378]: I0130 13:05:54.438286 3378 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:05:54.440145 kubelet[3378]: I0130 13:05:54.440026 3378 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:05:54.442745 kubelet[3378]: I0130 13:05:54.442721 3378 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:05:54.442878 kubelet[3378]: I0130 13:05:54.442862 3378 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:05:54.445636 kubelet[3378]: I0130 13:05:54.444926 3378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:05:54.446271 kubelet[3378]: I0130 13:05:54.446248 3378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:05:54.446347 kubelet[3378]: I0130 13:05:54.446314 3378 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:05:54.446347 kubelet[3378]: I0130 13:05:54.446336 3378 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:05:54.446434 kubelet[3378]: E0130 13:05:54.446404 3378 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:05:54.449228 kubelet[3378]: I0130 13:05:54.449206 3378 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:05:54.449433 kubelet[3378]: I0130 13:05:54.449409 3378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:05:54.453147 kubelet[3378]: E0130 13:05:54.453113 3378 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:05:54.455551 kubelet[3378]: I0130 13:05:54.455446 3378 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:05:54.500573 kubelet[3378]: I0130 13:05:54.500543 3378 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:05:54.500573 kubelet[3378]: I0130 13:05:54.500565 3378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:05:54.500763 kubelet[3378]: I0130 13:05:54.500587 3378 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:05:54.500807 kubelet[3378]: I0130 13:05:54.500771 3378 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:05:54.500807 kubelet[3378]: I0130 13:05:54.500786 3378 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:05:54.500889 kubelet[3378]: I0130 13:05:54.500811 3378 policy_none.go:49] "None policy: Start" Jan 30 13:05:54.501456 kubelet[3378]: I0130 13:05:54.501430 3378 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:05:54.501456 kubelet[3378]: I0130 13:05:54.501458 3378 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:05:54.501630 kubelet[3378]: I0130 13:05:54.501609 3378 state_mem.go:75] "Updated machine memory state" Jan 30 13:05:54.505483 kubelet[3378]: I0130 13:05:54.505457 3378 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:05:54.505903 kubelet[3378]: I0130 13:05:54.505627 3378 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:05:54.505903 kubelet[3378]: I0130 13:05:54.505740 3378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:05:54.543692 kubelet[3378]: I0130 13:05:54.543418 3378 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.546698 kubelet[3378]: I0130 
13:05:54.546643 3378 topology_manager.go:215] "Topology Admit Handler" podUID="b2dfddcb7d751c221fbe05443a478b63" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.546809 kubelet[3378]: I0130 13:05:54.546775 3378 topology_manager.go:215] "Topology Admit Handler" podUID="b1eb0583051a855d71f4cdc9c61747b9" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.547003 kubelet[3378]: I0130 13:05:54.546853 3378 topology_manager.go:215] "Topology Admit Handler" podUID="8396f59cea589c89970f59deaf313d4b" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.555360 kubelet[3378]: W0130 13:05:54.555170 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:05:54.559638 kubelet[3378]: W0130 13:05:54.559558 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:05:54.560292 kubelet[3378]: E0130 13:05:54.559794 3378 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" already exists" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.560292 kubelet[3378]: W0130 13:05:54.559628 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:05:54.560292 kubelet[3378]: E0130 13:05:54.559941 3378 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-d95fc4b65f\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.561593 kubelet[3378]: I0130 13:05:54.561575 3378 kubelet_node_status.go:112] "Node was previously registered" 
node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.561735 kubelet[3378]: I0130 13:05:54.561709 3378 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.743608 kubelet[3378]: I0130 13:05:54.743220 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2dfddcb7d751c221fbe05443a478b63-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b2dfddcb7d751c221fbe05443a478b63\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.743608 kubelet[3378]: I0130 13:05:54.743283 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.743608 kubelet[3378]: I0130 13:05:54.743315 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.743608 kubelet[3378]: I0130 13:05:54.743340 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.743608 kubelet[3378]: I0130 13:05:54.743365 3378 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8396f59cea589c89970f59deaf313d4b-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-d95fc4b65f\" (UID: \"8396f59cea589c89970f59deaf313d4b\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.744173 kubelet[3378]: I0130 13:05:54.743387 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2dfddcb7d751c221fbe05443a478b63-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b2dfddcb7d751c221fbe05443a478b63\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.744173 kubelet[3378]: I0130 13:05:54.743409 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2dfddcb7d751c221fbe05443a478b63-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b2dfddcb7d751c221fbe05443a478b63\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.744173 kubelet[3378]: I0130 13:05:54.743431 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.744173 kubelet[3378]: I0130 13:05:54.743473 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1eb0583051a855d71f4cdc9c61747b9-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-d95fc4b65f\" (UID: \"b1eb0583051a855d71f4cdc9c61747b9\") " 
pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:54.806384 sudo[3409]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:05:54.806743 sudo[3409]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:05:55.314710 sudo[3409]: pam_unix(sudo:session): session closed for user root Jan 30 13:05:55.422508 kubelet[3378]: I0130 13:05:55.422460 3378 apiserver.go:52] "Watching apiserver" Jan 30 13:05:55.443768 kubelet[3378]: I0130 13:05:55.443706 3378 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:05:55.493669 kubelet[3378]: W0130 13:05:55.493469 3378 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:05:55.494081 kubelet[3378]: E0130 13:05:55.493642 3378 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-a-d95fc4b65f\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" Jan 30 13:05:55.528897 kubelet[3378]: I0130 13:05:55.528389 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-d95fc4b65f" podStartSLOduration=1.528259485 podStartE2EDuration="1.528259485s" podCreationTimestamp="2025-01-30 13:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:05:55.527831779 +0000 UTC m=+1.167895695" watchObservedRunningTime="2025-01-30 13:05:55.528259485 +0000 UTC m=+1.168323301" Jan 30 13:05:55.555743 kubelet[3378]: I0130 13:05:55.555233 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-d95fc4b65f" podStartSLOduration=2.555209862 podStartE2EDuration="2.555209862s" podCreationTimestamp="2025-01-30 
13:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:05:55.540111651 +0000 UTC m=+1.180175567" watchObservedRunningTime="2025-01-30 13:05:55.555209862 +0000 UTC m=+1.195273778" Jan 30 13:05:55.572371 kubelet[3378]: I0130 13:05:55.571096 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-d95fc4b65f" podStartSLOduration=2.571073683 podStartE2EDuration="2.571073683s" podCreationTimestamp="2025-01-30 13:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:05:55.55650968 +0000 UTC m=+1.196573596" watchObservedRunningTime="2025-01-30 13:05:55.571073683 +0000 UTC m=+1.211137599" Jan 30 13:05:56.511471 sudo[2286]: pam_unix(sudo:session): session closed for user root Jan 30 13:05:56.613808 sshd[2285]: Connection closed by 10.200.16.10 port 47926 Jan 30 13:05:56.614645 sshd-session[2268]: pam_unix(sshd:session): session closed for user core Jan 30 13:05:56.619519 systemd[1]: sshd@6-10.200.4.12:22-10.200.16.10:47926.service: Deactivated successfully. Jan 30 13:05:56.621690 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:05:56.621943 systemd[1]: session-9.scope: Consumed 4.403s CPU time, 189.3M memory peak, 0B memory swap peak. Jan 30 13:05:56.622625 systemd-logind[1681]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:05:56.623719 systemd-logind[1681]: Removed session 9. Jan 30 13:06:06.947866 kubelet[3378]: I0130 13:06:06.947651 3378 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:06:06.948946 containerd[1702]: time="2025-01-30T13:06:06.948900427Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:06:06.949545 kubelet[3378]: I0130 13:06:06.949153 3378 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:06:07.920151 kubelet[3378]: I0130 13:06:07.918073 3378 topology_manager.go:215] "Topology Admit Handler" podUID="04561e50-aef0-4496-9120-b7709f3f8cb8" podNamespace="kube-system" podName="kube-proxy-p27r7"
Jan 30 13:06:07.931723 systemd[1]: Created slice kubepods-besteffort-pod04561e50_aef0_4496_9120_b7709f3f8cb8.slice - libcontainer container kubepods-besteffort-pod04561e50_aef0_4496_9120_b7709f3f8cb8.slice.
Jan 30 13:06:07.942754 kubelet[3378]: I0130 13:06:07.942716 3378 topology_manager.go:215] "Topology Admit Handler" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" podNamespace="kube-system" podName="cilium-xrcbj"
Jan 30 13:06:07.953412 systemd[1]: Created slice kubepods-burstable-pod9bb8b23f_46eb_43b2_8e91_9b99e2ab914d.slice - libcontainer container kubepods-burstable-pod9bb8b23f_46eb_43b2_8e91_9b99e2ab914d.slice.
Jan 30 13:06:08.016224 kubelet[3378]: I0130 13:06:08.014075 3378 topology_manager.go:215] "Topology Admit Handler" podUID="a10f8b93-8b03-4c8b-b567-f5167ab3e6e1" podNamespace="kube-system" podName="cilium-operator-599987898-svm2s"
Jan 30 13:06:08.024402 systemd[1]: Created slice kubepods-besteffort-poda10f8b93_8b03_4c8b_b567_f5167ab3e6e1.slice - libcontainer container kubepods-besteffort-poda10f8b93_8b03_4c8b_b567_f5167ab3e6e1.slice.
Jan 30 13:06:08.030203 kubelet[3378]: I0130 13:06:08.029974 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04561e50-aef0-4496-9120-b7709f3f8cb8-lib-modules\") pod \"kube-proxy-p27r7\" (UID: \"04561e50-aef0-4496-9120-b7709f3f8cb8\") " pod="kube-system/kube-proxy-p27r7"
Jan 30 13:06:08.030203 kubelet[3378]: I0130 13:06:08.030015 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-run\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.030203 kubelet[3378]: I0130 13:06:08.030150 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hostproc\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.030203 kubelet[3378]: I0130 13:06:08.030176 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-bpf-maps\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.031504 kubelet[3378]: I0130 13:06:08.031342 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svdl\" (UniqueName: \"kubernetes.io/projected/04561e50-aef0-4496-9120-b7709f3f8cb8-kube-api-access-5svdl\") pod \"kube-proxy-p27r7\" (UID: \"04561e50-aef0-4496-9120-b7709f3f8cb8\") " pod="kube-system/kube-proxy-p27r7"
Jan 30 13:06:08.031780 kubelet[3378]: I0130 13:06:08.031623 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04561e50-aef0-4496-9120-b7709f3f8cb8-kube-proxy\") pod \"kube-proxy-p27r7\" (UID: \"04561e50-aef0-4496-9120-b7709f3f8cb8\") " pod="kube-system/kube-proxy-p27r7"
Jan 30 13:06:08.031780 kubelet[3378]: I0130 13:06:08.031699 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04561e50-aef0-4496-9120-b7709f3f8cb8-xtables-lock\") pod \"kube-proxy-p27r7\" (UID: \"04561e50-aef0-4496-9120-b7709f3f8cb8\") " pod="kube-system/kube-proxy-p27r7"
Jan 30 13:06:08.132420 kubelet[3378]: I0130 13:06:08.132356 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cni-path\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132420 kubelet[3378]: I0130 13:06:08.132414 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-config-path\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132678 kubelet[3378]: I0130 13:06:08.132443 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-net\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132678 kubelet[3378]: I0130 13:06:08.132470 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8wx\" (UniqueName: \"kubernetes.io/projected/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-kube-api-access-6s8wx\") pod \"cilium-operator-599987898-svm2s\" (UID: \"a10f8b93-8b03-4c8b-b567-f5167ab3e6e1\") " pod="kube-system/cilium-operator-599987898-svm2s"
Jan 30 13:06:08.132678 kubelet[3378]: I0130 13:06:08.132511 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-clustermesh-secrets\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132678 kubelet[3378]: I0130 13:06:08.132535 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-cgroup\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132678 kubelet[3378]: I0130 13:06:08.132577 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-etc-cni-netd\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132934 kubelet[3378]: I0130 13:06:08.132650 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-kernel\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132934 kubelet[3378]: I0130 13:06:08.132719 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnhbz\" (UniqueName: \"kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-kube-api-access-wnhbz\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132934 kubelet[3378]: I0130 13:06:08.132764 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-xtables-lock\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132934 kubelet[3378]: I0130 13:06:08.132811 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hubble-tls\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.132934 kubelet[3378]: I0130 13:06:08.132844 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-cilium-config-path\") pod \"cilium-operator-599987898-svm2s\" (UID: \"a10f8b93-8b03-4c8b-b567-f5167ab3e6e1\") " pod="kube-system/cilium-operator-599987898-svm2s"
Jan 30 13:06:08.133203 kubelet[3378]: I0130 13:06:08.132891 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-lib-modules\") pod \"cilium-xrcbj\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " pod="kube-system/cilium-xrcbj"
Jan 30 13:06:08.259422 containerd[1702]: time="2025-01-30T13:06:08.255665576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p27r7,Uid:04561e50-aef0-4496-9120-b7709f3f8cb8,Namespace:kube-system,Attempt:0,}"
Jan 30 13:06:08.304105 containerd[1702]: time="2025-01-30T13:06:08.303796730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:08.304105 containerd[1702]: time="2025-01-30T13:06:08.303880931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:08.304105 containerd[1702]: time="2025-01-30T13:06:08.303903232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:08.304105 containerd[1702]: time="2025-01-30T13:06:08.303987233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:08.320312 systemd[1]: Started cri-containerd-27708799c8f61bcc1c5a012e4d5b111ef8e628adb554b8d57e707b49bcadd429.scope - libcontainer container 27708799c8f61bcc1c5a012e4d5b111ef8e628adb554b8d57e707b49bcadd429.
Jan 30 13:06:08.330315 containerd[1702]: time="2025-01-30T13:06:08.329934185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-svm2s,Uid:a10f8b93-8b03-4c8b-b567-f5167ab3e6e1,Namespace:kube-system,Attempt:0,}"
Jan 30 13:06:08.342306 containerd[1702]: time="2025-01-30T13:06:08.342253952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p27r7,Uid:04561e50-aef0-4496-9120-b7709f3f8cb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"27708799c8f61bcc1c5a012e4d5b111ef8e628adb554b8d57e707b49bcadd429\""
Jan 30 13:06:08.345243 containerd[1702]: time="2025-01-30T13:06:08.345097791Z" level=info msg="CreateContainer within sandbox \"27708799c8f61bcc1c5a012e4d5b111ef8e628adb554b8d57e707b49bcadd429\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:06:08.429788 containerd[1702]: time="2025-01-30T13:06:08.429683640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:08.429788 containerd[1702]: time="2025-01-30T13:06:08.429778541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:08.430003 containerd[1702]: time="2025-01-30T13:06:08.429807742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:08.430049 containerd[1702]: time="2025-01-30T13:06:08.429996044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:08.447310 systemd[1]: Started cri-containerd-265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5.scope - libcontainer container 265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5.
Jan 30 13:06:08.453704 containerd[1702]: time="2025-01-30T13:06:08.453595665Z" level=info msg="CreateContainer within sandbox \"27708799c8f61bcc1c5a012e4d5b111ef8e628adb554b8d57e707b49bcadd429\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa1d47fff6c6bdbdda06bfea2cbdc3f51e2aa4f3d53fd16a3935ff44aad8c763\""
Jan 30 13:06:08.456232 containerd[1702]: time="2025-01-30T13:06:08.455331688Z" level=info msg="StartContainer for \"aa1d47fff6c6bdbdda06bfea2cbdc3f51e2aa4f3d53fd16a3935ff44aad8c763\""
Jan 30 13:06:08.494037 systemd[1]: Started cri-containerd-aa1d47fff6c6bdbdda06bfea2cbdc3f51e2aa4f3d53fd16a3935ff44aad8c763.scope - libcontainer container aa1d47fff6c6bdbdda06bfea2cbdc3f51e2aa4f3d53fd16a3935ff44aad8c763.
Jan 30 13:06:08.509708 containerd[1702]: time="2025-01-30T13:06:08.509585125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-svm2s,Uid:a10f8b93-8b03-4c8b-b567-f5167ab3e6e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5\""
Jan 30 13:06:08.515032 containerd[1702]: time="2025-01-30T13:06:08.514945398Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 30 13:06:08.561495 containerd[1702]: time="2025-01-30T13:06:08.561449330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xrcbj,Uid:9bb8b23f-46eb-43b2-8e91-9b99e2ab914d,Namespace:kube-system,Attempt:0,}"
Jan 30 13:06:08.574254 containerd[1702]: time="2025-01-30T13:06:08.574092202Z" level=info msg="StartContainer for \"aa1d47fff6c6bdbdda06bfea2cbdc3f51e2aa4f3d53fd16a3935ff44aad8c763\" returns successfully"
Jan 30 13:06:08.627577 containerd[1702]: time="2025-01-30T13:06:08.627293624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:08.628396 containerd[1702]: time="2025-01-30T13:06:08.627482327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:08.628519 containerd[1702]: time="2025-01-30T13:06:08.628427940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:08.630898 containerd[1702]: time="2025-01-30T13:06:08.628609142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:08.651321 systemd[1]: Started cri-containerd-9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4.scope - libcontainer container 9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4.
Jan 30 13:06:08.672960 containerd[1702]: time="2025-01-30T13:06:08.672919844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xrcbj,Uid:9bb8b23f-46eb-43b2-8e91-9b99e2ab914d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\""
Jan 30 13:06:09.527699 kubelet[3378]: I0130 13:06:09.527631 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p27r7" podStartSLOduration=2.527609053 podStartE2EDuration="2.527609053s" podCreationTimestamp="2025-01-30 13:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:09.52736725 +0000 UTC m=+15.167431166" watchObservedRunningTime="2025-01-30 13:06:09.527609053 +0000 UTC m=+15.167672969"
Jan 30 13:06:10.201225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794624742.mount: Deactivated successfully.
Jan 30 13:06:10.951308 containerd[1702]: time="2025-01-30T13:06:10.951248890Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:10.954195 containerd[1702]: time="2025-01-30T13:06:10.954144230Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 30 13:06:10.958894 containerd[1702]: time="2025-01-30T13:06:10.958824993Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:10.960262 containerd[1702]: time="2025-01-30T13:06:10.960205512Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.445168513s"
Jan 30 13:06:10.960262 containerd[1702]: time="2025-01-30T13:06:10.960254713Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 30 13:06:10.961523 containerd[1702]: time="2025-01-30T13:06:10.961496329Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 13:06:10.963211 containerd[1702]: time="2025-01-30T13:06:10.963177752Z" level=info msg="CreateContainer within sandbox \"265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 13:06:11.010944 containerd[1702]: time="2025-01-30T13:06:11.010892700Z" level=info msg="CreateContainer within sandbox \"265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\""
Jan 30 13:06:11.012244 containerd[1702]: time="2025-01-30T13:06:11.011405907Z" level=info msg="StartContainer for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\""
Jan 30 13:06:11.040286 systemd[1]: Started cri-containerd-d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c.scope - libcontainer container d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c.
Jan 30 13:06:11.066753 containerd[1702]: time="2025-01-30T13:06:11.066597357Z" level=info msg="StartContainer for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" returns successfully"
Jan 30 13:06:17.481822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209214369.mount: Deactivated successfully.
Jan 30 13:06:19.654155 containerd[1702]: time="2025-01-30T13:06:19.654076296Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:19.656868 containerd[1702]: time="2025-01-30T13:06:19.656811629Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 30 13:06:19.661293 containerd[1702]: time="2025-01-30T13:06:19.661239183Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:06:19.663003 containerd[1702]: time="2025-01-30T13:06:19.662966703Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.701148269s"
Jan 30 13:06:19.663003 containerd[1702]: time="2025-01-30T13:06:19.662999804Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 30 13:06:19.665776 containerd[1702]: time="2025-01-30T13:06:19.665747337Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:06:19.695308 containerd[1702]: time="2025-01-30T13:06:19.695263793Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\""
Jan 30 13:06:19.695878 containerd[1702]: time="2025-01-30T13:06:19.695778399Z" level=info msg="StartContainer for \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\""
Jan 30 13:06:19.726373 systemd[1]: Started cri-containerd-1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43.scope - libcontainer container 1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43.
Jan 30 13:06:19.757248 containerd[1702]: time="2025-01-30T13:06:19.757078338Z" level=info msg="StartContainer for \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\" returns successfully"
Jan 30 13:06:19.767195 systemd[1]: cri-containerd-1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43.scope: Deactivated successfully.
Jan 30 13:06:20.561224 kubelet[3378]: I0130 13:06:20.561154 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-svm2s" podStartSLOduration=10.112802372 podStartE2EDuration="12.561089527s" podCreationTimestamp="2025-01-30 13:06:08 +0000 UTC" firstStartedPulling="2025-01-30 13:06:08.513040872 +0000 UTC m=+14.153104788" lastFinishedPulling="2025-01-30 13:06:10.961328127 +0000 UTC m=+16.601391943" observedRunningTime="2025-01-30 13:06:11.578524473 +0000 UTC m=+17.218588289" watchObservedRunningTime="2025-01-30 13:06:20.561089527 +0000 UTC m=+26.201153343"
Jan 30 13:06:20.684815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43-rootfs.mount: Deactivated successfully.
Jan 30 13:06:23.943994 containerd[1702]: time="2025-01-30T13:06:23.943922293Z" level=info msg="shim disconnected" id=1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43 namespace=k8s.io
Jan 30 13:06:23.943994 containerd[1702]: time="2025-01-30T13:06:23.943992094Z" level=warning msg="cleaning up after shim disconnected" id=1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43 namespace=k8s.io
Jan 30 13:06:23.943994 containerd[1702]: time="2025-01-30T13:06:23.944003294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:06:24.554312 containerd[1702]: time="2025-01-30T13:06:24.554234948Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:06:24.594828 containerd[1702]: time="2025-01-30T13:06:24.594786436Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\""
Jan 30 13:06:24.596205 containerd[1702]: time="2025-01-30T13:06:24.595451744Z" level=info msg="StartContainer for \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\""
Jan 30 13:06:24.659504 systemd[1]: Started cri-containerd-3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106.scope - libcontainer container 3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106.
Jan 30 13:06:24.696754 containerd[1702]: time="2025-01-30T13:06:24.696154758Z" level=info msg="StartContainer for \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\" returns successfully"
Jan 30 13:06:24.708754 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:06:24.709276 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:06:24.709363 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:06:24.716548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:06:24.716800 systemd[1]: cri-containerd-3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106.scope: Deactivated successfully.
Jan 30 13:06:24.738779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:06:24.744640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106-rootfs.mount: Deactivated successfully.
Jan 30 13:06:24.758006 containerd[1702]: time="2025-01-30T13:06:24.757936802Z" level=info msg="shim disconnected" id=3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106 namespace=k8s.io
Jan 30 13:06:24.758006 containerd[1702]: time="2025-01-30T13:06:24.758003603Z" level=warning msg="cleaning up after shim disconnected" id=3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106 namespace=k8s.io
Jan 30 13:06:24.758006 containerd[1702]: time="2025-01-30T13:06:24.758014503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:06:25.556657 containerd[1702]: time="2025-01-30T13:06:25.556590627Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:06:25.604656 containerd[1702]: time="2025-01-30T13:06:25.604608606Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\""
Jan 30 13:06:25.605289 containerd[1702]: time="2025-01-30T13:06:25.605197513Z" level=info msg="StartContainer for \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\""
Jan 30 13:06:25.643894 systemd[1]: run-containerd-runc-k8s.io-ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0-runc.DeM9gt.mount: Deactivated successfully.
Jan 30 13:06:25.649285 systemd[1]: Started cri-containerd-ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0.scope - libcontainer container ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0.
Jan 30 13:06:25.677293 systemd[1]: cri-containerd-ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0.scope: Deactivated successfully.
Jan 30 13:06:25.679277 containerd[1702]: time="2025-01-30T13:06:25.679018802Z" level=info msg="StartContainer for \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\" returns successfully"
Jan 30 13:06:25.718767 containerd[1702]: time="2025-01-30T13:06:25.718680480Z" level=info msg="shim disconnected" id=ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0 namespace=k8s.io
Jan 30 13:06:25.718767 containerd[1702]: time="2025-01-30T13:06:25.718766681Z" level=warning msg="cleaning up after shim disconnected" id=ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0 namespace=k8s.io
Jan 30 13:06:25.719243 containerd[1702]: time="2025-01-30T13:06:25.718780182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:06:26.560683 containerd[1702]: time="2025-01-30T13:06:26.560633927Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:06:26.585792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0-rootfs.mount: Deactivated successfully.
Jan 30 13:06:26.601778 containerd[1702]: time="2025-01-30T13:06:26.601726522Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\""
Jan 30 13:06:26.602805 containerd[1702]: time="2025-01-30T13:06:26.602236228Z" level=info msg="StartContainer for \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\""
Jan 30 13:06:26.634281 systemd[1]: Started cri-containerd-70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af.scope - libcontainer container 70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af.
Jan 30 13:06:26.658405 systemd[1]: cri-containerd-70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af.scope: Deactivated successfully.
Jan 30 13:06:26.668027 containerd[1702]: time="2025-01-30T13:06:26.667832418Z" level=info msg="StartContainer for \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\" returns successfully"
Jan 30 13:06:26.686429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af-rootfs.mount: Deactivated successfully.
Jan 30 13:06:26.696622 containerd[1702]: time="2025-01-30T13:06:26.696544164Z" level=info msg="shim disconnected" id=70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af namespace=k8s.io
Jan 30 13:06:26.696751 containerd[1702]: time="2025-01-30T13:06:26.696624865Z" level=warning msg="cleaning up after shim disconnected" id=70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af namespace=k8s.io
Jan 30 13:06:26.696751 containerd[1702]: time="2025-01-30T13:06:26.696637166Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:06:27.566789 containerd[1702]: time="2025-01-30T13:06:27.566577733Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:06:27.609453 containerd[1702]: time="2025-01-30T13:06:27.609415035Z" level=info msg="CreateContainer within sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\""
Jan 30 13:06:27.610007 containerd[1702]: time="2025-01-30T13:06:27.609946326Z" level=info msg="StartContainer for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\""
Jan 30 13:06:27.644257 systemd[1]: Started cri-containerd-e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361.scope - libcontainer container e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361.
Jan 30 13:06:27.676805 containerd[1702]: time="2025-01-30T13:06:27.676435943Z" level=info msg="StartContainer for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" returns successfully"
Jan 30 13:06:27.794640 kubelet[3378]: I0130 13:06:27.794607 3378 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:06:27.825533 kubelet[3378]: I0130 13:06:27.824381 3378 topology_manager.go:215] "Topology Admit Handler" podUID="3bfa2f49-e111-4864-9950-d2093f5b0b02" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z89zz"
Jan 30 13:06:27.828565 kubelet[3378]: W0130 13:06:27.828532 3378 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-a-d95fc4b65f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-d95fc4b65f' and this object
Jan 30 13:06:27.828685 kubelet[3378]: E0130 13:06:27.828575 3378 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186.1.0-a-d95fc4b65f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186.1.0-a-d95fc4b65f' and this object
Jan 30 13:06:27.829883 kubelet[3378]: I0130 13:06:27.829847 3378 topology_manager.go:215] "Topology Admit Handler" podUID="2f02bf51-dcd8-4544-b9fa-5de75f3389a9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-928n5"
Jan 30 13:06:27.836693 systemd[1]: Created slice kubepods-burstable-pod3bfa2f49_e111_4864_9950_d2093f5b0b02.slice - libcontainer container kubepods-burstable-pod3bfa2f49_e111_4864_9950_d2093f5b0b02.slice.
Jan 30 13:06:27.845224 systemd[1]: Created slice kubepods-burstable-pod2f02bf51_dcd8_4544_b9fa_5de75f3389a9.slice - libcontainer container kubepods-burstable-pod2f02bf51_dcd8_4544_b9fa_5de75f3389a9.slice.
Jan 30 13:06:27.967753 kubelet[3378]: I0130 13:06:27.967546 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bfa2f49-e111-4864-9950-d2093f5b0b02-config-volume\") pod \"coredns-7db6d8ff4d-z89zz\" (UID: \"3bfa2f49-e111-4864-9950-d2093f5b0b02\") " pod="kube-system/coredns-7db6d8ff4d-z89zz"
Jan 30 13:06:27.967753 kubelet[3378]: I0130 13:06:27.967603 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qlh9\" (UniqueName: \"kubernetes.io/projected/2f02bf51-dcd8-4544-b9fa-5de75f3389a9-kube-api-access-2qlh9\") pod \"coredns-7db6d8ff4d-928n5\" (UID: \"2f02bf51-dcd8-4544-b9fa-5de75f3389a9\") " pod="kube-system/coredns-7db6d8ff4d-928n5"
Jan 30 13:06:27.967753 kubelet[3378]: I0130 13:06:27.967635 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g8w5\" (UniqueName: \"kubernetes.io/projected/3bfa2f49-e111-4864-9950-d2093f5b0b02-kube-api-access-8g8w5\") pod \"coredns-7db6d8ff4d-z89zz\" (UID: \"3bfa2f49-e111-4864-9950-d2093f5b0b02\") " pod="kube-system/coredns-7db6d8ff4d-z89zz"
Jan 30 13:06:27.967753 kubelet[3378]: I0130 13:06:27.967659 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f02bf51-dcd8-4544-b9fa-5de75f3389a9-config-volume\") pod \"coredns-7db6d8ff4d-928n5\" (UID: \"2f02bf51-dcd8-4544-b9fa-5de75f3389a9\") " pod="kube-system/coredns-7db6d8ff4d-928n5"
Jan 30 13:06:28.583877 kubelet[3378]: I0130 13:06:28.583811 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xrcbj" podStartSLOduration=10.596904279 podStartE2EDuration="21.583788394s" podCreationTimestamp="2025-01-30 13:06:07 +0000 UTC" firstStartedPulling="2025-01-30 13:06:08.676959999 +0000 UTC m=+14.317023815" lastFinishedPulling="2025-01-30 13:06:19.663844114 +0000 UTC m=+25.303907930" observedRunningTime="2025-01-30 13:06:28.583436889 +0000 UTC m=+34.223500705" watchObservedRunningTime="2025-01-30 13:06:28.583788394 +0000 UTC m=+34.223852210"
Jan 30 13:06:29.043603 containerd[1702]: time="2025-01-30T13:06:29.043549805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z89zz,Uid:3bfa2f49-e111-4864-9950-d2093f5b0b02,Namespace:kube-system,Attempt:0,}"
Jan 30 13:06:29.050159 containerd[1702]: time="2025-01-30T13:06:29.050104388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-928n5,Uid:2f02bf51-dcd8-4544-b9fa-5de75f3389a9,Namespace:kube-system,Attempt:0,}"
Jan 30 13:06:29.832816 systemd-networkd[1327]: cilium_host: Link UP
Jan 30 13:06:29.833011 systemd-networkd[1327]: cilium_net: Link UP
Jan 30 13:06:29.833016 systemd-networkd[1327]: cilium_net: Gained carrier
Jan 30 13:06:29.836834 systemd-networkd[1327]: cilium_host: Gained carrier
Jan 30 13:06:29.837174 systemd-networkd[1327]: cilium_host: Gained IPv6LL
Jan 30 13:06:30.004183 systemd-networkd[1327]: cilium_vxlan: Link UP
Jan 30 13:06:30.004193 systemd-networkd[1327]: cilium_vxlan: Gained carrier
Jan 30 13:06:30.390363 kernel: NET: Registered PF_ALG protocol family
Jan 30 13:06:30.470281 systemd-networkd[1327]: cilium_net: Gained IPv6LL
Jan 30 13:06:31.248195 systemd-networkd[1327]: lxc_health: Link UP
Jan 30 13:06:31.264816 systemd-networkd[1327]: lxc_health: Gained carrier
Jan 30 13:06:31.430313 systemd-networkd[1327]: cilium_vxlan: Gained IPv6LL
Jan 30 13:06:31.631459 kernel: eth0: renamed from tmp46c4b
Jan 30 13:06:31.635363 systemd-networkd[1327]: lxc39e723592b35: Link UP
Jan 30 13:06:31.639368 systemd-networkd[1327]: lxc39e723592b35: Gained carrier
Jan 30 13:06:31.660218 systemd-networkd[1327]: lxc27de91d92069: Link UP
Jan 30 13:06:31.672232 kernel: eth0: renamed from tmp405b4
Jan 30 13:06:31.675668 systemd-networkd[1327]: lxc27de91d92069: Gained carrier
Jan 30 13:06:32.966411 systemd-networkd[1327]: lxc39e723592b35: Gained IPv6LL
Jan 30 13:06:33.094398 systemd-networkd[1327]: lxc_health: Gained IPv6LL
Jan 30 13:06:33.414350 systemd-networkd[1327]: lxc27de91d92069: Gained IPv6LL
Jan 30 13:06:35.421927 containerd[1702]: time="2025-01-30T13:06:35.420673523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:35.421927 containerd[1702]: time="2025-01-30T13:06:35.420744124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:35.421927 containerd[1702]: time="2025-01-30T13:06:35.420767124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:35.421927 containerd[1702]: time="2025-01-30T13:06:35.420859625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:35.448155 containerd[1702]: time="2025-01-30T13:06:35.441328388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:06:35.448155 containerd[1702]: time="2025-01-30T13:06:35.441458289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:06:35.448155 containerd[1702]: time="2025-01-30T13:06:35.441496189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:35.448155 containerd[1702]: time="2025-01-30T13:06:35.441632190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:06:35.472327 systemd[1]: Started cri-containerd-46c4badaf22eb747e2dfd09215f6a359cc759945e2ed51b053d7356ac1569d9d.scope - libcontainer container 46c4badaf22eb747e2dfd09215f6a359cc759945e2ed51b053d7356ac1569d9d.
Jan 30 13:06:35.477344 systemd[1]: Started cri-containerd-405b4f18160d4601966181ac372e062525404bec0cbd414525943b2c58039e6b.scope - libcontainer container 405b4f18160d4601966181ac372e062525404bec0cbd414525943b2c58039e6b.
Jan 30 13:06:35.554424 containerd[1702]: time="2025-01-30T13:06:35.554364687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z89zz,Uid:3bfa2f49-e111-4864-9950-d2093f5b0b02,Namespace:kube-system,Attempt:0,} returns sandbox id \"46c4badaf22eb747e2dfd09215f6a359cc759945e2ed51b053d7356ac1569d9d\""
Jan 30 13:06:35.559349 containerd[1702]: time="2025-01-30T13:06:35.558962623Z" level=info msg="CreateContainer within sandbox \"46c4badaf22eb747e2dfd09215f6a359cc759945e2ed51b053d7356ac1569d9d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:06:35.595342 containerd[1702]: time="2025-01-30T13:06:35.595298813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-928n5,Uid:2f02bf51-dcd8-4544-b9fa-5de75f3389a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"405b4f18160d4601966181ac372e062525404bec0cbd414525943b2c58039e6b\""
Jan 30 13:06:35.600502 containerd[1702]: time="2025-01-30T13:06:35.600065850Z" level=info msg="CreateContainer within sandbox \"405b4f18160d4601966181ac372e062525404bec0cbd414525943b2c58039e6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:06:35.623376 containerd[1702]: time="2025-01-30T13:06:35.623338436Z" level=info msg="CreateContainer within sandbox \"46c4badaf22eb747e2dfd09215f6a359cc759945e2ed51b053d7356ac1569d9d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"808de54ef352edb8d5e18a36bad8e6c306f1cb5158f376cf12081f6f5d21d82b\""
Jan 30 13:06:35.627720 containerd[1702]: time="2025-01-30T13:06:35.625154350Z" level=info msg="StartContainer for \"808de54ef352edb8d5e18a36bad8e6c306f1cb5158f376cf12081f6f5d21d82b\""
Jan 30 13:06:35.661597 containerd[1702]: time="2025-01-30T13:06:35.661553740Z" level=info msg="CreateContainer within sandbox \"405b4f18160d4601966181ac372e062525404bec0cbd414525943b2c58039e6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e21c5a7099bb6be729d6c1b4be81b5f2f94fb37904ac1d82f2f1a722505df1fc\""
Jan 30 13:06:35.662974 containerd[1702]: time="2025-01-30T13:06:35.662942851Z" level=info msg="StartContainer for \"e21c5a7099bb6be729d6c1b4be81b5f2f94fb37904ac1d82f2f1a722505df1fc\""
Jan 30 13:06:35.675434 systemd[1]: Started cri-containerd-808de54ef352edb8d5e18a36bad8e6c306f1cb5158f376cf12081f6f5d21d82b.scope - libcontainer container 808de54ef352edb8d5e18a36bad8e6c306f1cb5158f376cf12081f6f5d21d82b.
Jan 30 13:06:35.701385 systemd[1]: Started cri-containerd-e21c5a7099bb6be729d6c1b4be81b5f2f94fb37904ac1d82f2f1a722505df1fc.scope - libcontainer container e21c5a7099bb6be729d6c1b4be81b5f2f94fb37904ac1d82f2f1a722505df1fc.
Jan 30 13:06:35.718023 containerd[1702]: time="2025-01-30T13:06:35.717985589Z" level=info msg="StartContainer for \"808de54ef352edb8d5e18a36bad8e6c306f1cb5158f376cf12081f6f5d21d82b\" returns successfully"
Jan 30 13:06:35.749619 containerd[1702]: time="2025-01-30T13:06:35.749577240Z" level=info msg="StartContainer for \"e21c5a7099bb6be729d6c1b4be81b5f2f94fb37904ac1d82f2f1a722505df1fc\" returns successfully"
Jan 30 13:06:36.431073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409580762.mount: Deactivated successfully.
Jan 30 13:06:36.609898 kubelet[3378]: I0130 13:06:36.609835 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z89zz" podStartSLOduration=28.609814383 podStartE2EDuration="28.609814383s" podCreationTimestamp="2025-01-30 13:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:36.607792167 +0000 UTC m=+42.247856083" watchObservedRunningTime="2025-01-30 13:06:36.609814383 +0000 UTC m=+42.249878199"
Jan 30 13:07:56.606510 systemd[1]: Started sshd@7-10.200.4.12:22-10.200.16.10:37220.service - OpenSSH per-connection server daemon (10.200.16.10:37220).
Jan 30 13:07:57.251976 sshd[4753]: Accepted publickey for core from 10.200.16.10 port 37220 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:07:57.253636 sshd-session[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:07:57.259201 systemd-logind[1681]: New session 10 of user core.
Jan 30 13:07:57.265293 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:07:57.796004 sshd[4755]: Connection closed by 10.200.16.10 port 37220
Jan 30 13:07:57.796797 sshd-session[4753]: pam_unix(sshd:session): session closed for user core
Jan 30 13:07:57.799838 systemd[1]: sshd@7-10.200.4.12:22-10.200.16.10:37220.service: Deactivated successfully.
Jan 30 13:07:57.802321 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:07:57.804314 systemd-logind[1681]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:07:57.805421 systemd-logind[1681]: Removed session 10.
Jan 30 13:08:02.911119 systemd[1]: Started sshd@8-10.200.4.12:22-10.200.16.10:37228.service - OpenSSH per-connection server daemon (10.200.16.10:37228).
Jan 30 13:08:03.564982 sshd[4767]: Accepted publickey for core from 10.200.16.10 port 37228 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:03.566351 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:03.571310 systemd-logind[1681]: New session 11 of user core.
Jan 30 13:08:03.580343 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:08:04.076362 sshd[4769]: Connection closed by 10.200.16.10 port 37228
Jan 30 13:08:04.077166 sshd-session[4767]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:04.079961 systemd[1]: sshd@8-10.200.4.12:22-10.200.16.10:37228.service: Deactivated successfully.
Jan 30 13:08:04.082229 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:08:04.083917 systemd-logind[1681]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:08:04.084947 systemd-logind[1681]: Removed session 11.
Jan 30 13:08:09.191618 systemd[1]: Started sshd@9-10.200.4.12:22-10.200.16.10:41736.service - OpenSSH per-connection server daemon (10.200.16.10:41736).
Jan 30 13:08:09.837311 sshd[4783]: Accepted publickey for core from 10.200.16.10 port 41736 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:09.838989 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:09.843094 systemd-logind[1681]: New session 12 of user core.
Jan 30 13:08:09.853298 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:08:10.349057 sshd[4785]: Connection closed by 10.200.16.10 port 41736
Jan 30 13:08:10.349943 sshd-session[4783]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:10.353106 systemd[1]: sshd@9-10.200.4.12:22-10.200.16.10:41736.service: Deactivated successfully.
Jan 30 13:08:10.355113 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:08:10.356889 systemd-logind[1681]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:08:10.357920 systemd-logind[1681]: Removed session 12.
Jan 30 13:08:15.468420 systemd[1]: Started sshd@10-10.200.4.12:22-10.200.16.10:41752.service - OpenSSH per-connection server daemon (10.200.16.10:41752).
Jan 30 13:08:16.113162 sshd[4797]: Accepted publickey for core from 10.200.16.10 port 41752 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:16.114587 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:16.118539 systemd-logind[1681]: New session 13 of user core.
Jan 30 13:08:16.125334 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:08:16.640671 sshd[4799]: Connection closed by 10.200.16.10 port 41752
Jan 30 13:08:16.641459 sshd-session[4797]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:16.644851 systemd[1]: sshd@10-10.200.4.12:22-10.200.16.10:41752.service: Deactivated successfully.
Jan 30 13:08:16.647145 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:08:16.648777 systemd-logind[1681]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:08:16.649951 systemd-logind[1681]: Removed session 13.
Jan 30 13:08:21.760479 systemd[1]: Started sshd@11-10.200.4.12:22-10.200.16.10:33612.service - OpenSSH per-connection server daemon (10.200.16.10:33612).
Jan 30 13:08:22.403481 sshd[4811]: Accepted publickey for core from 10.200.16.10 port 33612 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:22.404995 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:22.409758 systemd-logind[1681]: New session 14 of user core.
Jan 30 13:08:22.416300 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:08:22.917090 sshd[4813]: Connection closed by 10.200.16.10 port 33612
Jan 30 13:08:22.917934 sshd-session[4811]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:22.922370 systemd[1]: sshd@11-10.200.4.12:22-10.200.16.10:33612.service: Deactivated successfully.
Jan 30 13:08:22.924348 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:08:22.925275 systemd-logind[1681]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:08:22.926247 systemd-logind[1681]: Removed session 14.
Jan 30 13:08:23.038422 systemd[1]: Started sshd@12-10.200.4.12:22-10.200.16.10:33616.service - OpenSSH per-connection server daemon (10.200.16.10:33616).
Jan 30 13:08:23.680753 sshd[4824]: Accepted publickey for core from 10.200.16.10 port 33616 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:23.682503 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:23.687849 systemd-logind[1681]: New session 15 of user core.
Jan 30 13:08:23.692289 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:08:24.232616 sshd[4826]: Connection closed by 10.200.16.10 port 33616
Jan 30 13:08:24.233658 sshd-session[4824]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:24.236743 systemd[1]: sshd@12-10.200.4.12:22-10.200.16.10:33616.service: Deactivated successfully.
Jan 30 13:08:24.238821 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:08:24.240661 systemd-logind[1681]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:08:24.241817 systemd-logind[1681]: Removed session 15.
Jan 30 13:08:24.351451 systemd[1]: Started sshd@13-10.200.4.12:22-10.200.16.10:33628.service - OpenSSH per-connection server daemon (10.200.16.10:33628).
Jan 30 13:08:24.997463 sshd[4834]: Accepted publickey for core from 10.200.16.10 port 33628 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:24.998932 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:25.003762 systemd-logind[1681]: New session 16 of user core.
Jan 30 13:08:25.006309 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:08:25.527438 sshd[4836]: Connection closed by 10.200.16.10 port 33628
Jan 30 13:08:25.528342 sshd-session[4834]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:25.531789 systemd[1]: sshd@13-10.200.4.12:22-10.200.16.10:33628.service: Deactivated successfully.
Jan 30 13:08:25.534154 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:08:25.535767 systemd-logind[1681]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:08:25.537057 systemd-logind[1681]: Removed session 16.
Jan 30 13:08:30.641518 systemd[1]: Started sshd@14-10.200.4.12:22-10.200.16.10:39710.service - OpenSSH per-connection server daemon (10.200.16.10:39710).
Jan 30 13:08:31.288580 sshd[4847]: Accepted publickey for core from 10.200.16.10 port 39710 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:31.290303 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:31.295651 systemd-logind[1681]: New session 17 of user core.
Jan 30 13:08:31.304276 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:08:31.802991 sshd[4849]: Connection closed by 10.200.16.10 port 39710
Jan 30 13:08:31.803730 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:31.807629 systemd[1]: sshd@14-10.200.4.12:22-10.200.16.10:39710.service: Deactivated successfully.
Jan 30 13:08:31.809734 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:08:31.810668 systemd-logind[1681]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:08:31.811618 systemd-logind[1681]: Removed session 17.
Jan 30 13:08:31.922794 systemd[1]: Started sshd@15-10.200.4.12:22-10.200.16.10:39726.service - OpenSSH per-connection server daemon (10.200.16.10:39726).
Jan 30 13:08:32.563783 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 39726 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:32.566151 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:32.571043 systemd-logind[1681]: New session 18 of user core.
Jan 30 13:08:32.578344 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:08:33.099833 sshd[4862]: Connection closed by 10.200.16.10 port 39726
Jan 30 13:08:33.100696 sshd-session[4860]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:33.103845 systemd[1]: sshd@15-10.200.4.12:22-10.200.16.10:39726.service: Deactivated successfully.
Jan 30 13:08:33.106242 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:08:33.107667 systemd-logind[1681]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:08:33.108956 systemd-logind[1681]: Removed session 18.
Jan 30 13:08:33.226510 systemd[1]: Started sshd@16-10.200.4.12:22-10.200.16.10:39728.service - OpenSSH per-connection server daemon (10.200.16.10:39728).
Jan 30 13:08:33.879927 sshd[4871]: Accepted publickey for core from 10.200.16.10 port 39728 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:33.881732 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:33.886636 systemd-logind[1681]: New session 19 of user core.
Jan 30 13:08:33.891276 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:08:35.747375 sshd[4873]: Connection closed by 10.200.16.10 port 39728
Jan 30 13:08:35.748244 sshd-session[4871]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:35.751957 systemd[1]: sshd@16-10.200.4.12:22-10.200.16.10:39728.service: Deactivated successfully.
Jan 30 13:08:35.754281 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:08:35.755946 systemd-logind[1681]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:08:35.757203 systemd-logind[1681]: Removed session 19.
Jan 30 13:08:35.864450 systemd[1]: Started sshd@17-10.200.4.12:22-10.200.16.10:44806.service - OpenSSH per-connection server daemon (10.200.16.10:44806).
Jan 30 13:08:36.512023 sshd[4890]: Accepted publickey for core from 10.200.16.10 port 44806 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:36.513771 sshd-session[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:36.518567 systemd-logind[1681]: New session 20 of user core.
Jan 30 13:08:36.521282 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:08:37.149341 sshd[4892]: Connection closed by 10.200.16.10 port 44806
Jan 30 13:08:37.150052 sshd-session[4890]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:37.152865 systemd[1]: sshd@17-10.200.4.12:22-10.200.16.10:44806.service: Deactivated successfully.
Jan 30 13:08:37.155074 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:08:37.156564 systemd-logind[1681]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:08:37.157816 systemd-logind[1681]: Removed session 20.
Jan 30 13:08:37.265433 systemd[1]: Started sshd@18-10.200.4.12:22-10.200.16.10:44808.service - OpenSSH per-connection server daemon (10.200.16.10:44808).
Jan 30 13:08:37.911166 sshd[4901]: Accepted publickey for core from 10.200.16.10 port 44808 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:37.912591 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:37.917066 systemd-logind[1681]: New session 21 of user core.
Jan 30 13:08:37.922300 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:08:38.439630 sshd[4903]: Connection closed by 10.200.16.10 port 44808
Jan 30 13:08:38.440378 sshd-session[4901]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:38.444073 systemd[1]: sshd@18-10.200.4.12:22-10.200.16.10:44808.service: Deactivated successfully.
Jan 30 13:08:38.446316 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:08:38.447511 systemd-logind[1681]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:08:38.449234 systemd-logind[1681]: Removed session 21.
Jan 30 13:08:43.558437 systemd[1]: Started sshd@19-10.200.4.12:22-10.200.16.10:44810.service - OpenSSH per-connection server daemon (10.200.16.10:44810).
Jan 30 13:08:44.209886 sshd[4919]: Accepted publickey for core from 10.200.16.10 port 44810 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:44.211379 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:44.216116 systemd-logind[1681]: New session 22 of user core.
Jan 30 13:08:44.218288 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:08:44.738365 sshd[4921]: Connection closed by 10.200.16.10 port 44810
Jan 30 13:08:44.739244 sshd-session[4919]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:44.742630 systemd[1]: sshd@19-10.200.4.12:22-10.200.16.10:44810.service: Deactivated successfully.
Jan 30 13:08:44.745422 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:08:44.747496 systemd-logind[1681]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:08:44.748609 systemd-logind[1681]: Removed session 22.
Jan 30 13:08:49.860441 systemd[1]: Started sshd@20-10.200.4.12:22-10.200.16.10:48772.service - OpenSSH per-connection server daemon (10.200.16.10:48772).
Jan 30 13:08:50.503385 sshd[4932]: Accepted publickey for core from 10.200.16.10 port 48772 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:50.504833 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:50.508872 systemd-logind[1681]: New session 23 of user core.
Jan 30 13:08:50.518273 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:08:51.013351 sshd[4934]: Connection closed by 10.200.16.10 port 48772
Jan 30 13:08:51.014279 sshd-session[4932]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:51.017723 systemd[1]: sshd@20-10.200.4.12:22-10.200.16.10:48772.service: Deactivated successfully.
Jan 30 13:08:51.019905 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:08:51.021449 systemd-logind[1681]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:08:51.022511 systemd-logind[1681]: Removed session 23.
Jan 30 13:08:56.131606 systemd[1]: Started sshd@21-10.200.4.12:22-10.200.16.10:55778.service - OpenSSH per-connection server daemon (10.200.16.10:55778).
Jan 30 13:08:56.786282 sshd[4947]: Accepted publickey for core from 10.200.16.10 port 55778 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:56.787770 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:56.792199 systemd-logind[1681]: New session 24 of user core.
Jan 30 13:08:56.796307 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:08:57.295027 sshd[4949]: Connection closed by 10.200.16.10 port 55778
Jan 30 13:08:57.295965 sshd-session[4947]: pam_unix(sshd:session): session closed for user core
Jan 30 13:08:57.300311 systemd[1]: sshd@21-10.200.4.12:22-10.200.16.10:55778.service: Deactivated successfully.
Jan 30 13:08:57.302807 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:08:57.303985 systemd-logind[1681]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:08:57.305259 systemd-logind[1681]: Removed session 24.
Jan 30 13:08:57.412227 systemd[1]: Started sshd@22-10.200.4.12:22-10.200.16.10:55782.service - OpenSSH per-connection server daemon (10.200.16.10:55782).
Jan 30 13:08:58.138324 sshd[4959]: Accepted publickey for core from 10.200.16.10 port 55782 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA
Jan 30 13:08:58.139821 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:08:58.144483 systemd-logind[1681]: New session 25 of user core.
Jan 30 13:08:58.148492 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:09:00.481045 kubelet[3378]: I0130 13:09:00.479855 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-928n5" podStartSLOduration=172.479828024 podStartE2EDuration="2m52.479828024s" podCreationTimestamp="2025-01-30 13:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:06:36.645375566 +0000 UTC m=+42.285439382" watchObservedRunningTime="2025-01-30 13:09:00.479828024 +0000 UTC m=+186.119891840"
Jan 30 13:09:00.495786 containerd[1702]: time="2025-01-30T13:09:00.494838134Z" level=info msg="StopContainer for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" with timeout 30 (s)"
Jan 30 13:09:00.495786 containerd[1702]: time="2025-01-30T13:09:00.495404439Z" level=info msg="Stop container \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" with signal terminated"
Jan 30 13:09:00.517822 systemd[1]: run-containerd-runc-k8s.io-e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361-runc.t0RiA6.mount: Deactivated successfully.
Jan 30 13:09:00.518806 systemd[1]: cri-containerd-d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c.scope: Deactivated successfully.
Jan 30 13:09:00.530089 containerd[1702]: time="2025-01-30T13:09:00.530051295Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:09:00.540090 containerd[1702]: time="2025-01-30T13:09:00.540050169Z" level=info msg="StopContainer for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" with timeout 2 (s)"
Jan 30 13:09:00.540558 containerd[1702]: time="2025-01-30T13:09:00.540463372Z" level=info msg="Stop container \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" with signal terminated"
Jan 30 13:09:00.550669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c-rootfs.mount: Deactivated successfully.
Jan 30 13:09:00.551650 systemd-networkd[1327]: lxc_health: Link DOWN
Jan 30 13:09:00.551655 systemd-networkd[1327]: lxc_health: Lost carrier
Jan 30 13:09:00.568614 systemd[1]: cri-containerd-e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361.scope: Deactivated successfully.
Jan 30 13:09:00.569425 systemd[1]: cri-containerd-e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361.scope: Consumed 7.275s CPU time.
Jan 30 13:09:00.588355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361-rootfs.mount: Deactivated successfully.
Jan 30 13:09:02.441980 containerd[1702]: time="2025-01-30T13:09:02.441838820Z" level=info msg="shim disconnected" id=e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361 namespace=k8s.io
Jan 30 13:09:02.441980 containerd[1702]: time="2025-01-30T13:09:02.441974121Z" level=warning msg="cleaning up after shim disconnected" id=e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361 namespace=k8s.io
Jan 30 13:09:02.441980 containerd[1702]: time="2025-01-30T13:09:02.441987121Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:09:02.443298 containerd[1702]: time="2025-01-30T13:09:02.442324524Z" level=info msg="shim disconnected" id=d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c namespace=k8s.io
Jan 30 13:09:02.443298 containerd[1702]: time="2025-01-30T13:09:02.442373824Z" level=warning msg="cleaning up after shim disconnected" id=d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c namespace=k8s.io
Jan 30 13:09:02.443298 containerd[1702]: time="2025-01-30T13:09:02.442384224Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:09:02.469289 containerd[1702]: time="2025-01-30T13:09:02.469240323Z" level=info msg="StopContainer for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" returns successfully"
Jan 30 13:09:02.469936 containerd[1702]: time="2025-01-30T13:09:02.469906028Z" level=info msg="StopPodSandbox for \"265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5\""
Jan 30 13:09:02.470045 containerd[1702]: time="2025-01-30T13:09:02.469945928Z" level=info msg="Container to stop \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 13:09:02.471901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5-shm.mount: Deactivated successfully.
Jan 30 13:09:02.473910 containerd[1702]: time="2025-01-30T13:09:02.473497554Z" level=info msg="StopContainer for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" returns successfully" Jan 30 13:09:02.474103 containerd[1702]: time="2025-01-30T13:09:02.474008258Z" level=info msg="StopPodSandbox for \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\"" Jan 30 13:09:02.474103 containerd[1702]: time="2025-01-30T13:09:02.474055058Z" level=info msg="Container to stop \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:09:02.474103 containerd[1702]: time="2025-01-30T13:09:02.474094359Z" level=info msg="Container to stop \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:09:02.474616 containerd[1702]: time="2025-01-30T13:09:02.474106259Z" level=info msg="Container to stop \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:09:02.474616 containerd[1702]: time="2025-01-30T13:09:02.474120859Z" level=info msg="Container to stop \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:09:02.474616 containerd[1702]: time="2025-01-30T13:09:02.474153259Z" level=info msg="Container to stop \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:09:02.483119 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4-shm.mount: Deactivated successfully. Jan 30 13:09:02.484918 systemd[1]: cri-containerd-265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5.scope: Deactivated successfully. 
Jan 30 13:09:02.495471 systemd[1]: cri-containerd-9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4.scope: Deactivated successfully. Jan 30 13:09:02.517922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5-rootfs.mount: Deactivated successfully. Jan 30 13:09:02.524976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4-rootfs.mount: Deactivated successfully. Jan 30 13:09:02.533165 containerd[1702]: time="2025-01-30T13:09:02.533083895Z" level=info msg="shim disconnected" id=265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5 namespace=k8s.io Jan 30 13:09:02.533313 containerd[1702]: time="2025-01-30T13:09:02.533176695Z" level=warning msg="cleaning up after shim disconnected" id=265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5 namespace=k8s.io Jan 30 13:09:02.533313 containerd[1702]: time="2025-01-30T13:09:02.533190495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:02.534578 containerd[1702]: time="2025-01-30T13:09:02.533083895Z" level=info msg="shim disconnected" id=9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4 namespace=k8s.io Jan 30 13:09:02.534662 containerd[1702]: time="2025-01-30T13:09:02.534578406Z" level=warning msg="cleaning up after shim disconnected" id=9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4 namespace=k8s.io Jan 30 13:09:02.534662 containerd[1702]: time="2025-01-30T13:09:02.534589406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:02.542854 sshd[4961]: Connection closed by 10.200.16.10 port 55782 Jan 30 13:09:02.545015 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Jan 30 13:09:02.550640 systemd[1]: sshd@22-10.200.4.12:22-10.200.16.10:55782.service: Deactivated successfully. 
Jan 30 13:09:02.556978 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:09:02.557812 systemd[1]: session-25.scope: Consumed 1.408s CPU time. Jan 30 13:09:02.558935 systemd-logind[1681]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:09:02.560542 systemd-logind[1681]: Removed session 25. Jan 30 13:09:02.565146 containerd[1702]: time="2025-01-30T13:09:02.563001216Z" level=info msg="TearDown network for sandbox \"265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5\" successfully" Jan 30 13:09:02.565146 containerd[1702]: time="2025-01-30T13:09:02.563033916Z" level=info msg="StopPodSandbox for \"265d86bb294ac99d72309060e79fe9af40eab8813e001f45cff3a5b3eab5fba5\" returns successfully" Jan 30 13:09:02.565146 containerd[1702]: time="2025-01-30T13:09:02.563181917Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:09:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:09:02.567225 containerd[1702]: time="2025-01-30T13:09:02.567006045Z" level=info msg="TearDown network for sandbox \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" successfully" Jan 30 13:09:02.567365 containerd[1702]: time="2025-01-30T13:09:02.567338348Z" level=info msg="StopPodSandbox for \"9b365537e524b60789f300ccfdc8988b6c826dae2eb437be357c2acbd90900f4\" returns successfully" Jan 30 13:09:02.644353 kubelet[3378]: I0130 13:09:02.644300 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hostproc\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.644888 kubelet[3378]: I0130 13:09:02.644362 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-lib-modules\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.644888 kubelet[3378]: I0130 13:09:02.644389 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-kernel\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.644888 kubelet[3378]: I0130 13:09:02.644412 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-cgroup\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.644888 kubelet[3378]: I0130 13:09:02.644434 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-xtables-lock\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.644888 kubelet[3378]: I0130 13:09:02.644488 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-run\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.644888 kubelet[3378]: I0130 13:09:02.644523 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-net\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645252 kubelet[3378]: I0130 13:09:02.644562 3378 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-cilium-config-path\") pod \"a10f8b93-8b03-4c8b-b567-f5167ab3e6e1\" (UID: \"a10f8b93-8b03-4c8b-b567-f5167ab3e6e1\") " Jan 30 13:09:02.645252 kubelet[3378]: I0130 13:09:02.644592 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-config-path\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645252 kubelet[3378]: I0130 13:09:02.644624 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s8wx\" (UniqueName: \"kubernetes.io/projected/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-kube-api-access-6s8wx\") pod \"a10f8b93-8b03-4c8b-b567-f5167ab3e6e1\" (UID: \"a10f8b93-8b03-4c8b-b567-f5167ab3e6e1\") " Jan 30 13:09:02.645252 kubelet[3378]: I0130 13:09:02.644664 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-clustermesh-secrets\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645252 kubelet[3378]: I0130 13:09:02.644694 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnhbz\" (UniqueName: \"kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-kube-api-access-wnhbz\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645252 kubelet[3378]: I0130 13:09:02.644721 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hubble-tls\") pod 
\"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645553 kubelet[3378]: I0130 13:09:02.644748 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-bpf-maps\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645553 kubelet[3378]: I0130 13:09:02.644770 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cni-path\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645553 kubelet[3378]: I0130 13:09:02.644794 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-etc-cni-netd\") pod \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\" (UID: \"9bb8b23f-46eb-43b2-8e91-9b99e2ab914d\") " Jan 30 13:09:02.645553 kubelet[3378]: I0130 13:09:02.644892 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.645553 kubelet[3378]: I0130 13:09:02.644946 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hostproc" (OuterVolumeSpecName: "hostproc") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.645805 kubelet[3378]: I0130 13:09:02.644973 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.645805 kubelet[3378]: I0130 13:09:02.644996 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.645805 kubelet[3378]: I0130 13:09:02.645020 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.645805 kubelet[3378]: I0130 13:09:02.645043 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.645805 kubelet[3378]: I0130 13:09:02.645065 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.646115 kubelet[3378]: I0130 13:09:02.645087 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.650813 kubelet[3378]: I0130 13:09:02.650773 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.651144 kubelet[3378]: I0130 13:09:02.650924 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cni-path" (OuterVolumeSpecName: "cni-path") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:09:02.658216 kubelet[3378]: I0130 13:09:02.658081 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:02.658319 kubelet[3378]: I0130 13:09:02.658288 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-kube-api-access-wnhbz" (OuterVolumeSpecName: "kube-api-access-wnhbz") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "kube-api-access-wnhbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:02.659674 kubelet[3378]: I0130 13:09:02.659229 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:02.659817 systemd[1]: var-lib-kubelet-pods-9bb8b23f\x2d46eb\x2d43b2\x2d8e91\x2d9b99e2ab914d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwnhbz.mount: Deactivated successfully. Jan 30 13:09:02.659956 systemd[1]: var-lib-kubelet-pods-9bb8b23f\x2d46eb\x2d43b2\x2d8e91\x2d9b99e2ab914d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 13:09:02.660765 kubelet[3378]: I0130 13:09:02.660740 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a10f8b93-8b03-4c8b-b567-f5167ab3e6e1" (UID: "a10f8b93-8b03-4c8b-b567-f5167ab3e6e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:09:02.663959 kubelet[3378]: I0130 13:09:02.663859 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" (UID: "9bb8b23f-46eb-43b2-8e91-9b99e2ab914d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:09:02.664242 kubelet[3378]: I0130 13:09:02.664206 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-kube-api-access-6s8wx" (OuterVolumeSpecName: "kube-api-access-6s8wx") pod "a10f8b93-8b03-4c8b-b567-f5167ab3e6e1" (UID: "a10f8b93-8b03-4c8b-b567-f5167ab3e6e1"). InnerVolumeSpecName "kube-api-access-6s8wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:09:02.668407 systemd[1]: Started sshd@23-10.200.4.12:22-10.200.16.10:55794.service - OpenSSH per-connection server daemon (10.200.16.10:55794). 
Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745401 3378 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hostproc\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745454 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-run\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745468 3378 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-lib-modules\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745482 3378 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-kernel\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745495 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-cgroup\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745510 3378 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-xtables-lock\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.745567 kubelet[3378]: I0130 13:09:02.745523 3378 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-host-proc-sys-net\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 
13:09:02.745567 kubelet[3378]: I0130 13:09:02.745537 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-cilium-config-path\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745551 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cilium-config-path\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745566 3378 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-hubble-tls\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745586 3378 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-bpf-maps\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745601 3378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6s8wx\" (UniqueName: \"kubernetes.io/projected/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1-kube-api-access-6s8wx\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745613 3378 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-clustermesh-secrets\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745627 3378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wnhbz\" (UniqueName: \"kubernetes.io/projected/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-kube-api-access-wnhbz\") on node 
\"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745641 3378 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-cni-path\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.746089 kubelet[3378]: I0130 13:09:02.745654 3378 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d-etc-cni-netd\") on node \"ci-4186.1.0-a-d95fc4b65f\" DevicePath \"\"" Jan 30 13:09:02.900907 kubelet[3378]: I0130 13:09:02.900613 3378 scope.go:117] "RemoveContainer" containerID="e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361" Jan 30 13:09:02.905490 containerd[1702]: time="2025-01-30T13:09:02.905064643Z" level=info msg="RemoveContainer for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\"" Jan 30 13:09:02.910084 systemd[1]: Removed slice kubepods-burstable-pod9bb8b23f_46eb_43b2_8e91_9b99e2ab914d.slice - libcontainer container kubepods-burstable-pod9bb8b23f_46eb_43b2_8e91_9b99e2ab914d.slice. Jan 30 13:09:02.910519 systemd[1]: kubepods-burstable-pod9bb8b23f_46eb_43b2_8e91_9b99e2ab914d.slice: Consumed 7.355s CPU time. Jan 30 13:09:02.913733 systemd[1]: Removed slice kubepods-besteffort-poda10f8b93_8b03_4c8b_b567_f5167ab3e6e1.slice - libcontainer container kubepods-besteffort-poda10f8b93_8b03_4c8b_b567_f5167ab3e6e1.slice. 
Jan 30 13:09:02.920363 containerd[1702]: time="2025-01-30T13:09:02.920327356Z" level=info msg="RemoveContainer for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" returns successfully" Jan 30 13:09:02.920589 kubelet[3378]: I0130 13:09:02.920566 3378 scope.go:117] "RemoveContainer" containerID="70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af" Jan 30 13:09:02.922524 containerd[1702]: time="2025-01-30T13:09:02.922008168Z" level=info msg="RemoveContainer for \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\"" Jan 30 13:09:02.933110 containerd[1702]: time="2025-01-30T13:09:02.932979949Z" level=info msg="RemoveContainer for \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\" returns successfully" Jan 30 13:09:02.933469 kubelet[3378]: I0130 13:09:02.933415 3378 scope.go:117] "RemoveContainer" containerID="ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0" Jan 30 13:09:02.936228 containerd[1702]: time="2025-01-30T13:09:02.935581069Z" level=info msg="RemoveContainer for \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\"" Jan 30 13:09:02.942010 containerd[1702]: time="2025-01-30T13:09:02.941975816Z" level=info msg="RemoveContainer for \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\" returns successfully" Jan 30 13:09:02.942218 kubelet[3378]: I0130 13:09:02.942181 3378 scope.go:117] "RemoveContainer" containerID="3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106" Jan 30 13:09:02.943203 containerd[1702]: time="2025-01-30T13:09:02.943176125Z" level=info msg="RemoveContainer for \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\"" Jan 30 13:09:02.951326 containerd[1702]: time="2025-01-30T13:09:02.951300285Z" level=info msg="RemoveContainer for \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\" returns successfully" Jan 30 13:09:02.951499 kubelet[3378]: I0130 13:09:02.951471 3378 scope.go:117] 
"RemoveContainer" containerID="1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43" Jan 30 13:09:02.952820 containerd[1702]: time="2025-01-30T13:09:02.952585794Z" level=info msg="RemoveContainer for \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\"" Jan 30 13:09:02.963111 containerd[1702]: time="2025-01-30T13:09:02.963079772Z" level=info msg="RemoveContainer for \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\" returns successfully" Jan 30 13:09:02.963303 kubelet[3378]: I0130 13:09:02.963274 3378 scope.go:117] "RemoveContainer" containerID="e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361" Jan 30 13:09:02.963538 containerd[1702]: time="2025-01-30T13:09:02.963493975Z" level=error msg="ContainerStatus for \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\": not found" Jan 30 13:09:02.963665 kubelet[3378]: E0130 13:09:02.963640 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\": not found" containerID="e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361" Jan 30 13:09:02.963767 kubelet[3378]: I0130 13:09:02.963671 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361"} err="failed to get container status \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1bfb471d06703d0f55a728beb6da81b1f6164c2c7426f60d85e9f57454a6361\": not found" Jan 30 13:09:02.963767 kubelet[3378]: I0130 13:09:02.963763 3378 scope.go:117] "RemoveContainer" 
containerID="70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af" Jan 30 13:09:02.963966 containerd[1702]: time="2025-01-30T13:09:02.963932178Z" level=error msg="ContainerStatus for \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\": not found" Jan 30 13:09:02.964087 kubelet[3378]: E0130 13:09:02.964060 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\": not found" containerID="70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af" Jan 30 13:09:02.964209 kubelet[3378]: I0130 13:09:02.964091 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af"} err="failed to get container status \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\": rpc error: code = NotFound desc = an error occurred when try to find container \"70f957fa837ba1e88efa17a606ad30e7099f24b6a9644c5e16920d0b7aee97af\": not found" Jan 30 13:09:02.964209 kubelet[3378]: I0130 13:09:02.964114 3378 scope.go:117] "RemoveContainer" containerID="ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0" Jan 30 13:09:02.964321 containerd[1702]: time="2025-01-30T13:09:02.964294681Z" level=error msg="ContainerStatus for \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\": not found" Jan 30 13:09:02.964456 kubelet[3378]: E0130 13:09:02.964407 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\": not found" containerID="ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0" Jan 30 13:09:02.964456 kubelet[3378]: I0130 13:09:02.964434 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0"} err="failed to get container status \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca4fb1bfb1cb8fd0072bc17d7665746d3d82d9d1b7e01a40d8a54267cda4cad0\": not found" Jan 30 13:09:02.964605 kubelet[3378]: I0130 13:09:02.964454 3378 scope.go:117] "RemoveContainer" containerID="3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106" Jan 30 13:09:02.964653 containerd[1702]: time="2025-01-30T13:09:02.964628383Z" level=error msg="ContainerStatus for \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\": not found" Jan 30 13:09:02.964759 kubelet[3378]: E0130 13:09:02.964738 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\": not found" containerID="3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106" Jan 30 13:09:02.964828 kubelet[3378]: I0130 13:09:02.964765 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106"} err="failed to get container status \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"3e30113559f1f56c5f4f1b9e01b571ab576685b6496fdb78dffb73fbd7841106\": not found" Jan 30 13:09:02.964828 kubelet[3378]: I0130 13:09:02.964784 3378 scope.go:117] "RemoveContainer" containerID="1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43" Jan 30 13:09:02.964970 containerd[1702]: time="2025-01-30T13:09:02.964938185Z" level=error msg="ContainerStatus for \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\": not found" Jan 30 13:09:02.965117 kubelet[3378]: E0130 13:09:02.965091 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\": not found" containerID="1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43" Jan 30 13:09:02.965203 kubelet[3378]: I0130 13:09:02.965119 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43"} err="failed to get container status \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\": rpc error: code = NotFound desc = an error occurred when try to find container \"1310e072b2397fb7a80a01446025e61d03ee376aea0d4495a30cc561e6ce5b43\": not found" Jan 30 13:09:02.965203 kubelet[3378]: I0130 13:09:02.965159 3378 scope.go:117] "RemoveContainer" containerID="d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c" Jan 30 13:09:02.966102 containerd[1702]: time="2025-01-30T13:09:02.966053294Z" level=info msg="RemoveContainer for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\"" Jan 30 13:09:02.974457 containerd[1702]: 
time="2025-01-30T13:09:02.974425056Z" level=info msg="RemoveContainer for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" returns successfully" Jan 30 13:09:02.974626 kubelet[3378]: I0130 13:09:02.974590 3378 scope.go:117] "RemoveContainer" containerID="d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c" Jan 30 13:09:02.974850 containerd[1702]: time="2025-01-30T13:09:02.974786658Z" level=error msg="ContainerStatus for \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\": not found" Jan 30 13:09:02.974944 kubelet[3378]: E0130 13:09:02.974921 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\": not found" containerID="d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c" Jan 30 13:09:02.975018 kubelet[3378]: I0130 13:09:02.974948 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c"} err="failed to get container status \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4f733231b0743ffc945e78ad6e106e8fe8ce5693b6bcf674b2595c2dd4c690c\": not found" Jan 30 13:09:03.309195 sshd[5121]: Accepted publickey for core from 10.200.16.10 port 55794 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:09:03.310725 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:09:03.315519 systemd-logind[1681]: New session 26 of user core. 
Jan 30 13:09:03.319275 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:09:03.472384 systemd[1]: var-lib-kubelet-pods-a10f8b93\x2d8b03\x2d4c8b\x2db567\x2df5167ab3e6e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6s8wx.mount: Deactivated successfully. Jan 30 13:09:03.472570 systemd[1]: var-lib-kubelet-pods-9bb8b23f\x2d46eb\x2d43b2\x2d8e91\x2d9b99e2ab914d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:09:04.280721 kubelet[3378]: I0130 13:09:04.280658 3378 topology_manager.go:215] "Topology Admit Handler" podUID="1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a" podNamespace="kube-system" podName="cilium-bb2d6" Jan 30 13:09:04.281218 kubelet[3378]: E0130 13:09:04.280753 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a10f8b93-8b03-4c8b-b567-f5167ab3e6e1" containerName="cilium-operator" Jan 30 13:09:04.281218 kubelet[3378]: E0130 13:09:04.280767 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" containerName="mount-cgroup" Jan 30 13:09:04.281218 kubelet[3378]: E0130 13:09:04.280775 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" containerName="apply-sysctl-overwrites" Jan 30 13:09:04.281218 kubelet[3378]: E0130 13:09:04.280782 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" containerName="mount-bpf-fs" Jan 30 13:09:04.281218 kubelet[3378]: E0130 13:09:04.280792 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" containerName="clean-cilium-state" Jan 30 13:09:04.281218 kubelet[3378]: E0130 13:09:04.280802 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" containerName="cilium-agent" Jan 30 13:09:04.281218 kubelet[3378]: I0130 13:09:04.280829 3378 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a10f8b93-8b03-4c8b-b567-f5167ab3e6e1" containerName="cilium-operator" Jan 30 13:09:04.281218 kubelet[3378]: I0130 13:09:04.280840 3378 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" containerName="cilium-agent" Jan 30 13:09:04.295605 systemd[1]: Created slice kubepods-burstable-pod1f6851fb_dc50_43a2_ba5b_c5ebfd3cd24a.slice - libcontainer container kubepods-burstable-pod1f6851fb_dc50_43a2_ba5b_c5ebfd3cd24a.slice. Jan 30 13:09:04.352482 kubelet[3378]: I0130 13:09:04.352446 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-cilium-run\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.352770 kubelet[3378]: I0130 13:09:04.352725 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-cni-path\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.352855 kubelet[3378]: I0130 13:09:04.352783 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-xtables-lock\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.352855 kubelet[3378]: I0130 13:09:04.352842 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-cilium-cgroup\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 
13:09:04.352951 kubelet[3378]: I0130 13:09:04.352866 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-clustermesh-secrets\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.352995 kubelet[3378]: I0130 13:09:04.352947 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-cilium-config-path\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353038 kubelet[3378]: I0130 13:09:04.352999 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-host-proc-sys-kernel\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353038 kubelet[3378]: I0130 13:09:04.353027 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-hostproc\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353116 kubelet[3378]: I0130 13:09:04.353080 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-bpf-maps\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353116 kubelet[3378]: I0130 13:09:04.353104 3378 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-etc-cni-netd\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353227 kubelet[3378]: I0130 13:09:04.353163 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-lib-modules\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353227 kubelet[3378]: I0130 13:09:04.353188 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf42d\" (UniqueName: \"kubernetes.io/projected/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-kube-api-access-gf42d\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353304 kubelet[3378]: I0130 13:09:04.353241 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-cilium-ipsec-secrets\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353304 kubelet[3378]: I0130 13:09:04.353264 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-host-proc-sys-net\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.353385 kubelet[3378]: I0130 13:09:04.353302 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a-hubble-tls\") pod \"cilium-bb2d6\" (UID: \"1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a\") " pod="kube-system/cilium-bb2d6" Jan 30 13:09:04.362112 sshd[5123]: Connection closed by 10.200.16.10 port 55794 Jan 30 13:09:04.362854 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Jan 30 13:09:04.366618 systemd[1]: sshd@23-10.200.4.12:22-10.200.16.10:55794.service: Deactivated successfully. Jan 30 13:09:04.368568 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:09:04.369489 systemd-logind[1681]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:09:04.370564 systemd-logind[1681]: Removed session 26. Jan 30 13:09:04.449813 kubelet[3378]: I0130 13:09:04.449764 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bb8b23f-46eb-43b2-8e91-9b99e2ab914d" path="/var/lib/kubelet/pods/9bb8b23f-46eb-43b2-8e91-9b99e2ab914d/volumes" Jan 30 13:09:04.450412 kubelet[3378]: I0130 13:09:04.450384 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a10f8b93-8b03-4c8b-b567-f5167ab3e6e1" path="/var/lib/kubelet/pods/a10f8b93-8b03-4c8b-b567-f5167ab3e6e1/volumes" Jan 30 13:09:04.499348 systemd[1]: Started sshd@24-10.200.4.12:22-10.200.16.10:55810.service - OpenSSH per-connection server daemon (10.200.16.10:55810). Jan 30 13:09:04.559711 kubelet[3378]: E0130 13:09:04.559556 3378 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:09:04.601866 containerd[1702]: time="2025-01-30T13:09:04.601816280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bb2d6,Uid:1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a,Namespace:kube-system,Attempt:0,}" Jan 30 13:09:04.641089 containerd[1702]: time="2025-01-30T13:09:04.640994869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:09:04.641894 containerd[1702]: time="2025-01-30T13:09:04.641809575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:09:04.641894 containerd[1702]: time="2025-01-30T13:09:04.641836376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:04.642124 containerd[1702]: time="2025-01-30T13:09:04.641937776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:09:04.663071 systemd[1]: run-containerd-runc-k8s.io-e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f-runc.Dlq3Sy.mount: Deactivated successfully. Jan 30 13:09:04.673299 systemd[1]: Started cri-containerd-e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f.scope - libcontainer container e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f. 
Jan 30 13:09:04.695466 containerd[1702]: time="2025-01-30T13:09:04.695359171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bb2d6,Uid:1f6851fb-dc50-43a2-ba5b-c5ebfd3cd24a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\"" Jan 30 13:09:04.699105 containerd[1702]: time="2025-01-30T13:09:04.699061698Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:09:04.734770 containerd[1702]: time="2025-01-30T13:09:04.734722462Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4\"" Jan 30 13:09:04.735306 containerd[1702]: time="2025-01-30T13:09:04.735263266Z" level=info msg="StartContainer for \"55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4\"" Jan 30 13:09:04.763301 systemd[1]: Started cri-containerd-55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4.scope - libcontainer container 55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4. Jan 30 13:09:04.793409 containerd[1702]: time="2025-01-30T13:09:04.793245194Z" level=info msg="StartContainer for \"55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4\" returns successfully" Jan 30 13:09:04.796975 systemd[1]: cri-containerd-55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4.scope: Deactivated successfully. 
Jan 30 13:09:04.876148 containerd[1702]: time="2025-01-30T13:09:04.875908705Z" level=info msg="shim disconnected" id=55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4 namespace=k8s.io Jan 30 13:09:04.876148 containerd[1702]: time="2025-01-30T13:09:04.875989206Z" level=warning msg="cleaning up after shim disconnected" id=55a30fd83408300a1ebf4a71d5df1d914363bc6721f698e5ca6e4b471860d6c4 namespace=k8s.io Jan 30 13:09:04.876148 containerd[1702]: time="2025-01-30T13:09:04.876005306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:04.916232 containerd[1702]: time="2025-01-30T13:09:04.916185303Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:09:04.945140 containerd[1702]: time="2025-01-30T13:09:04.945089116Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8\"" Jan 30 13:09:04.945794 containerd[1702]: time="2025-01-30T13:09:04.945712021Z" level=info msg="StartContainer for \"86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8\"" Jan 30 13:09:04.975375 systemd[1]: Started cri-containerd-86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8.scope - libcontainer container 86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8. Jan 30 13:09:05.002390 containerd[1702]: time="2025-01-30T13:09:05.002330639Z" level=info msg="StartContainer for \"86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8\" returns successfully" Jan 30 13:09:05.006410 systemd[1]: cri-containerd-86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8.scope: Deactivated successfully. 
Jan 30 13:09:05.035239 containerd[1702]: time="2025-01-30T13:09:05.035170282Z" level=info msg="shim disconnected" id=86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8 namespace=k8s.io Jan 30 13:09:05.035239 containerd[1702]: time="2025-01-30T13:09:05.035232882Z" level=warning msg="cleaning up after shim disconnected" id=86e54fd7ebf8af695d7987381b922db28e827a993da838faa534277e45df9ef8 namespace=k8s.io Jan 30 13:09:05.035239 containerd[1702]: time="2025-01-30T13:09:05.035243882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:05.143733 sshd[5136]: Accepted publickey for core from 10.200.16.10 port 55810 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:09:05.145425 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:09:05.150338 systemd-logind[1681]: New session 27 of user core. Jan 30 13:09:05.161283 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 13:09:05.594918 sshd[5300]: Connection closed by 10.200.16.10 port 55810 Jan 30 13:09:05.595840 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Jan 30 13:09:05.599776 systemd[1]: sshd@24-10.200.4.12:22-10.200.16.10:55810.service: Deactivated successfully. Jan 30 13:09:05.602199 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:09:05.604039 systemd-logind[1681]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:09:05.605308 systemd-logind[1681]: Removed session 27. Jan 30 13:09:05.716439 systemd[1]: Started sshd@25-10.200.4.12:22-10.200.16.10:55820.service - OpenSSH per-connection server daemon (10.200.16.10:55820). 
Jan 30 13:09:05.921597 containerd[1702]: time="2025-01-30T13:09:05.921349730Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:09:05.957923 containerd[1702]: time="2025-01-30T13:09:05.957871199Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc\"" Jan 30 13:09:05.960165 containerd[1702]: time="2025-01-30T13:09:05.958545404Z" level=info msg="StartContainer for \"eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc\"" Jan 30 13:09:05.992310 systemd[1]: Started cri-containerd-eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc.scope - libcontainer container eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc. Jan 30 13:09:06.021279 systemd[1]: cri-containerd-eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc.scope: Deactivated successfully. 
Jan 30 13:09:06.023481 containerd[1702]: time="2025-01-30T13:09:06.023429084Z" level=info msg="StartContainer for \"eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc\" returns successfully" Jan 30 13:09:06.056123 containerd[1702]: time="2025-01-30T13:09:06.056055625Z" level=info msg="shim disconnected" id=eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc namespace=k8s.io Jan 30 13:09:06.056123 containerd[1702]: time="2025-01-30T13:09:06.056115125Z" level=warning msg="cleaning up after shim disconnected" id=eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc namespace=k8s.io Jan 30 13:09:06.056417 containerd[1702]: time="2025-01-30T13:09:06.056144126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:06.361076 sshd[5306]: Accepted publickey for core from 10.200.16.10 port 55820 ssh2: RSA SHA256:R1VmL1R3PxweWMZWYfiOmewd1nMkLMPBC099indN3nA Jan 30 13:09:06.362626 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:09:06.368142 systemd-logind[1681]: New session 28 of user core. Jan 30 13:09:06.375287 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 13:09:06.481273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef12cf8ca63a3abf9cfb355d2151fa4aea9d19ebe4e3b80e44ab760315557bc-rootfs.mount: Deactivated successfully. 
Jan 30 13:09:06.927157 containerd[1702]: time="2025-01-30T13:09:06.925699351Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:09:06.961300 containerd[1702]: time="2025-01-30T13:09:06.961259413Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b\"" Jan 30 13:09:06.961945 containerd[1702]: time="2025-01-30T13:09:06.961914618Z" level=info msg="StartContainer for \"4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b\"" Jan 30 13:09:06.998312 systemd[1]: Started cri-containerd-4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b.scope - libcontainer container 4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b. Jan 30 13:09:07.022663 systemd[1]: cri-containerd-4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b.scope: Deactivated successfully. 
Jan 30 13:09:07.029076 containerd[1702]: time="2025-01-30T13:09:07.028956713Z" level=info msg="StartContainer for \"4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b\" returns successfully" Jan 30 13:09:07.067471 containerd[1702]: time="2025-01-30T13:09:07.067282797Z" level=info msg="shim disconnected" id=4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b namespace=k8s.io Jan 30 13:09:07.067471 containerd[1702]: time="2025-01-30T13:09:07.067368297Z" level=warning msg="cleaning up after shim disconnected" id=4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b namespace=k8s.io Jan 30 13:09:07.067471 containerd[1702]: time="2025-01-30T13:09:07.067383897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:09:07.081673 containerd[1702]: time="2025-01-30T13:09:07.081618203Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:09:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:09:07.481704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4766b1dece7873fc6e814641c423efa725d8e591c078e2eb9264f01f4d59276b-rootfs.mount: Deactivated successfully. 
Jan 30 13:09:07.931183 containerd[1702]: time="2025-01-30T13:09:07.931004054Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:09:07.971438 containerd[1702]: time="2025-01-30T13:09:07.971398083Z" level=info msg="CreateContainer within sandbox \"e24a6e027782a5a9ed04aae7812a89d3ac2277ab036e48c898ab075eaf92e95f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c66d4222ee85a79b17014f177f91de6fb55483d46a7c5c58b4e90843dea9354d\"" Jan 30 13:09:07.972012 containerd[1702]: time="2025-01-30T13:09:07.971916883Z" level=info msg="StartContainer for \"c66d4222ee85a79b17014f177f91de6fb55483d46a7c5c58b4e90843dea9354d\"" Jan 30 13:09:08.004433 systemd[1]: Started cri-containerd-c66d4222ee85a79b17014f177f91de6fb55483d46a7c5c58b4e90843dea9354d.scope - libcontainer container c66d4222ee85a79b17014f177f91de6fb55483d46a7c5c58b4e90843dea9354d. 
Jan 30 13:09:08.038420 containerd[1702]: time="2025-01-30T13:09:08.038363431Z" level=info msg="StartContainer for \"c66d4222ee85a79b17014f177f91de6fb55483d46a7c5c58b4e90843dea9354d\" returns successfully" Jan 30 13:09:08.388603 kubelet[3378]: I0130 13:09:08.388470 3378 setters.go:580] "Node became not ready" node="ci-4186.1.0-a-d95fc4b65f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:09:08Z","lastTransitionTime":"2025-01-30T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:09:08.469201 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:09:11.303545 systemd-networkd[1327]: lxc_health: Link UP Jan 30 13:09:11.311658 systemd-networkd[1327]: lxc_health: Gained carrier Jan 30 13:09:12.327388 systemd-networkd[1327]: lxc_health: Gained IPv6LL Jan 30 13:09:12.633153 kubelet[3378]: I0130 13:09:12.632531 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bb2d6" podStartSLOduration=8.632505872 podStartE2EDuration="8.632505872s" podCreationTimestamp="2025-01-30 13:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:09:08.948201478 +0000 UTC m=+194.588265294" watchObservedRunningTime="2025-01-30 13:09:12.632505872 +0000 UTC m=+198.272569688" Jan 30 13:09:13.049503 systemd[1]: run-containerd-runc-k8s.io-c66d4222ee85a79b17014f177f91de6fb55483d46a7c5c58b4e90843dea9354d-runc.O603cQ.mount: Deactivated successfully. Jan 30 13:09:17.550098 sshd[5366]: Connection closed by 10.200.16.10 port 55820 Jan 30 13:09:17.551152 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Jan 30 13:09:17.557326 systemd-logind[1681]: Session 28 logged out. Waiting for processes to exit. 
Jan 30 13:09:17.557917 systemd[1]: sshd@25-10.200.4.12:22-10.200.16.10:55820.service: Deactivated successfully. Jan 30 13:09:17.560946 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 13:09:17.562980 systemd-logind[1681]: Removed session 28.