Jan 14 14:32:38.148268 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 14 14:32:38.148295 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:32:38.148304 kernel: BIOS-provided physical RAM map:
Jan 14 14:32:38.148313 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 14:32:38.148319 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 14:32:38.148325 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 14:32:38.148333 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 14 14:32:38.148370 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 14 14:32:38.148384 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 14:32:38.148390 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 14:32:38.148398 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 14:32:38.148406 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 14:32:38.148412 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 14:32:38.148421 kernel: NX (Execute Disable) protection: active
Jan 14 14:32:38.148433 kernel: APIC: Static calls initialized
Jan 14 14:32:38.148441 kernel: efi: EFI v2.7 by Microsoft
Jan 14 14:32:38.148451 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jan 14 14:32:38.148458 kernel: SMBIOS 3.1.0 present.
Jan 14 14:32:38.148465 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 14:32:38.148475 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 14:32:38.148482 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 14:32:38.148489 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 14:32:38.148499 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 14:32:38.148506 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 14:32:38.148515 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 14:32:38.148525 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 14:32:38.148532 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 14:32:38.148540 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 14:32:38.148550 kernel: tsc: Detected 2593.905 MHz processor
Jan 14 14:32:38.148557 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 14:32:38.148565 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 14:32:38.148575 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 14:32:38.148582 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 14:32:38.148592 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 14:32:38.148601 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 14:32:38.148608 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 14:32:38.148615 kernel: Using GB pages for direct mapping
Jan 14 14:32:38.148625 kernel: Secure boot disabled
Jan 14 14:32:38.148632 kernel: ACPI: Early table checksum verification disabled
Jan 14 14:32:38.148639 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 14:32:38.148653 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148663 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148671 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 14:32:38.148681 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 14:32:38.148688 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148696 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148706 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148716 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148726 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148734 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148742 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:32:38.148752 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 14:32:38.148760 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 14:32:38.148767 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 14:32:38.148777 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 14:32:38.148788 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 14:32:38.148795 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 14:32:38.148806 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 14:32:38.148813 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 14:32:38.148821 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 14:32:38.148837 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 14:32:38.148846 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 14:32:38.148855 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 14:32:38.148865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 14:32:38.148876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 14:32:38.148886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 14:32:38.148894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 14:32:38.148903 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 14:32:38.148913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 14:32:38.148921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 14:32:38.148930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 14:32:38.148940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 14:32:38.148948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 14:32:38.148961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 14:32:38.148968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 14:32:38.148976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 14:32:38.148986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 14:32:38.148994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 14:32:38.149001 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 14:32:38.149012 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 14:32:38.149019 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 14:32:38.149027 kernel: Zone ranges:
Jan 14 14:32:38.149036 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 14:32:38.149044 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 14:32:38.149051 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 14:32:38.149059 kernel: Movable zone start for each node
Jan 14 14:32:38.149066 kernel: Early memory node ranges
Jan 14 14:32:38.149073 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 14:32:38.149081 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 14:32:38.149088 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 14:32:38.149096 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 14:32:38.149105 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 14:32:38.149113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 14:32:38.149120 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 14:32:38.149127 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 14:32:38.149135 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 14:32:38.149142 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 14:32:38.149150 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 14:32:38.149157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 14:32:38.149165 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 14:32:38.149175 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 14:32:38.149185 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 14:32:38.149192 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 14:32:38.149200 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 14:32:38.149211 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 14:32:38.149218 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 14:32:38.149226 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 14:32:38.149236 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 14:32:38.149243 kernel: pcpu-alloc: [0] 0 1
Jan 14 14:32:38.149254 kernel: Hyper-V: PV spinlocks enabled
Jan 14 14:32:38.149264 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 14:32:38.149272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:32:38.149282 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 14:32:38.149290 kernel: random: crng init done
Jan 14 14:32:38.149298 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 14:32:38.149307 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 14:32:38.149315 kernel: Fallback order for Node 0: 0
Jan 14 14:32:38.149325 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 14:32:38.149350 kernel: Policy zone: Normal
Jan 14 14:32:38.149363 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 14:32:38.149371 kernel: software IO TLB: area num 2.
Jan 14 14:32:38.149381 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 310128K reserved, 0K cma-reserved)
Jan 14 14:32:38.149390 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 14:32:38.149398 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 14 14:32:38.149406 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 14:32:38.149414 kernel: Dynamic Preempt: voluntary
Jan 14 14:32:38.149425 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 14:32:38.149434 kernel: rcu: RCU event tracing is enabled.
Jan 14 14:32:38.149444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 14:32:38.149452 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 14:32:38.149460 kernel: Rude variant of Tasks RCU enabled.
Jan 14 14:32:38.149468 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 14:32:38.149477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 14:32:38.149495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 14:32:38.149503 kernel: Using NULL legacy PIC
Jan 14 14:32:38.149511 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 14:32:38.149519 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 14:32:38.149527 kernel: Console: colour dummy device 80x25
Jan 14 14:32:38.149535 kernel: printk: console [tty1] enabled
Jan 14 14:32:38.149543 kernel: printk: console [ttyS0] enabled
Jan 14 14:32:38.149550 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 14:32:38.149558 kernel: ACPI: Core revision 20230628
Jan 14 14:32:38.149570 kernel: Failed to register legacy timer interrupt
Jan 14 14:32:38.149581 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 14:32:38.149594 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 14:32:38.149605 kernel: Hyper-V: Using IPI hypercalls
Jan 14 14:32:38.149613 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 14:32:38.149621 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 14:32:38.149629 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 14:32:38.149637 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 14:32:38.149646 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 14:32:38.149655 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 14:32:38.149668 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Jan 14 14:32:38.149676 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 14:32:38.149688 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 14:32:38.149696 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 14:32:38.149707 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 14:32:38.149715 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 14:32:38.149722 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 14:32:38.149730 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 14:32:38.149738 kernel: RETBleed: Vulnerable
Jan 14 14:32:38.149750 kernel: Speculative Store Bypass: Vulnerable
Jan 14 14:32:38.149758 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 14:32:38.149765 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 14:32:38.149773 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 14:32:38.149785 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 14:32:38.149792 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 14:32:38.149801 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 14:32:38.149811 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 14:32:38.149819 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 14:32:38.149827 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 14:32:38.149837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 14:32:38.149848 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 14:32:38.149855 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 14:32:38.149863 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 14:32:38.149871 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 14:32:38.149879 kernel: Freeing SMP alternatives memory: 32K
Jan 14 14:32:38.149890 kernel: pid_max: default: 32768 minimum: 301
Jan 14 14:32:38.149898 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 14:32:38.149906 kernel: landlock: Up and running.
Jan 14 14:32:38.149914 kernel: SELinux: Initializing.
Jan 14 14:32:38.149925 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 14:32:38.149933 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 14:32:38.149944 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 14:32:38.149954 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 14:32:38.149962 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 14:32:38.149970 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 14:32:38.149978 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 14:32:38.149986 kernel: signal: max sigframe size: 3632
Jan 14 14:32:38.149995 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 14:32:38.150003 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 14:32:38.150011 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 14:32:38.150021 kernel: smp: Bringing up secondary CPUs ...
Jan 14 14:32:38.150032 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 14:32:38.150043 kernel: .... node #0, CPUs: #1
Jan 14 14:32:38.150052 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 14:32:38.150060 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 14:32:38.150071 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 14:32:38.150079 kernel: smpboot: Max logical packages: 1
Jan 14 14:32:38.150087 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 14:32:38.150097 kernel: devtmpfs: initialized
Jan 14 14:32:38.150108 kernel: x86/mm: Memory block size: 128MB
Jan 14 14:32:38.150116 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 14:32:38.150127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 14:32:38.150135 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 14:32:38.150144 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 14:32:38.150155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 14:32:38.150163 kernel: audit: initializing netlink subsys (disabled)
Jan 14 14:32:38.150179 kernel: audit: type=2000 audit(1736865156.028:1): state=initialized audit_enabled=0 res=1
Jan 14 14:32:38.150187 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 14:32:38.150201 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 14:32:38.150209 kernel: cpuidle: using governor menu
Jan 14 14:32:38.150220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 14:32:38.150228 kernel: dca service started, version 1.12.1
Jan 14 14:32:38.150239 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 14:32:38.150247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 14:32:38.150257 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 14:32:38.150268 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 14:32:38.150276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 14:32:38.150287 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 14:32:38.150298 kernel: ACPI: Added _OSI(Module Device)
Jan 14 14:32:38.150306 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 14:32:38.150315 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 14:32:38.150324 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 14:32:38.150332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 14:32:38.150353 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 14:32:38.150362 kernel: ACPI: Interpreter enabled
Jan 14 14:32:38.150372 kernel: ACPI: PM: (supports S0 S5)
Jan 14 14:32:38.150383 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 14:32:38.150394 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 14:32:38.150402 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 14:32:38.150410 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 14:32:38.150421 kernel: iommu: Default domain type: Translated
Jan 14 14:32:38.150429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 14:32:38.150437 kernel: efivars: Registered efivars operations
Jan 14 14:32:38.150445 kernel: PCI: Using ACPI for IRQ routing
Jan 14 14:32:38.150454 kernel: PCI: System does not support PCI
Jan 14 14:32:38.150465 kernel: vgaarb: loaded
Jan 14 14:32:38.150474 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 14:32:38.150483 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 14:32:38.150493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 14:32:38.150501 kernel: pnp: PnP ACPI init
Jan 14 14:32:38.150512 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 14:32:38.150520 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 14:32:38.150528 kernel: NET: Registered PF_INET protocol family
Jan 14 14:32:38.150539 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 14:32:38.150549 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 14:32:38.150559 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 14:32:38.150568 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 14:32:38.150576 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 14:32:38.150586 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 14:32:38.155141 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 14:32:38.155156 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 14:32:38.155169 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 14:32:38.155183 kernel: NET: Registered PF_XDP protocol family
Jan 14 14:32:38.155204 kernel: PCI: CLS 0 bytes, default 64
Jan 14 14:32:38.155217 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 14:32:38.155231 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 14:32:38.155245 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 14:32:38.155259 kernel: Initialise system trusted keyrings
Jan 14 14:32:38.155272 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 14:32:38.155286 kernel: Key type asymmetric registered
Jan 14 14:32:38.155300 kernel: Asymmetric key parser 'x509' registered
Jan 14 14:32:38.155314 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 14:32:38.155332 kernel: io scheduler mq-deadline registered
Jan 14 14:32:38.155378 kernel: io scheduler kyber registered
Jan 14 14:32:38.155392 kernel: io scheduler bfq registered
Jan 14 14:32:38.155407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 14:32:38.155421 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 14:32:38.155436 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 14:32:38.155451 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 14:32:38.155465 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 14:32:38.155647 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 14:32:38.155773 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T14:32:37 UTC (1736865157)
Jan 14 14:32:38.155881 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 14:32:38.155897 kernel: intel_pstate: CPU model not supported
Jan 14 14:32:38.155911 kernel: efifb: probing for efifb
Jan 14 14:32:38.155924 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 14:32:38.155938 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 14:32:38.155951 kernel: efifb: scrolling: redraw
Jan 14 14:32:38.155969 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 14:32:38.155982 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 14:32:38.155995 kernel: fb0: EFI VGA frame buffer device
Jan 14 14:32:38.156009 kernel: pstore: Using crash dump compression: deflate
Jan 14 14:32:38.156023 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 14:32:38.156036 kernel: NET: Registered PF_INET6 protocol family
Jan 14 14:32:38.156050 kernel: Segment Routing with IPv6
Jan 14 14:32:38.156064 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 14:32:38.156077 kernel: NET: Registered PF_PACKET protocol family
Jan 14 14:32:38.156091 kernel: Key type dns_resolver registered
Jan 14 14:32:38.156108 kernel: IPI shorthand broadcast: enabled
Jan 14 14:32:38.156121 kernel: sched_clock: Marking stable (924002800, 49665600)->(1224486500, -250818100)
Jan 14 14:32:38.156135 kernel: registered taskstats version 1
Jan 14 14:32:38.156148 kernel: Loading compiled-in X.509 certificates
Jan 14 14:32:38.156162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 14 14:32:38.156176 kernel: Key type .fscrypt registered
Jan 14 14:32:38.156189 kernel: Key type fscrypt-provisioning registered
Jan 14 14:32:38.156203 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 14:32:38.156220 kernel: ima: Allocated hash algorithm: sha1
Jan 14 14:32:38.156234 kernel: ima: No architecture policies found
Jan 14 14:32:38.156248 kernel: clk: Disabling unused clocks
Jan 14 14:32:38.156262 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 14 14:32:38.156277 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 14:32:38.156291 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 14 14:32:38.156305 kernel: Run /init as init process
Jan 14 14:32:38.156320 kernel: with arguments:
Jan 14 14:32:38.156333 kernel: /init
Jan 14 14:32:38.156375 kernel: with environment:
Jan 14 14:32:38.156389 kernel: HOME=/
Jan 14 14:32:38.156403 kernel: TERM=linux
Jan 14 14:32:38.156416 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 14:32:38.156433 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 14:32:38.156450 systemd[1]: Detected virtualization microsoft.
Jan 14 14:32:38.156465 systemd[1]: Detected architecture x86-64.
Jan 14 14:32:38.156479 systemd[1]: Running in initrd.
Jan 14 14:32:38.156497 systemd[1]: No hostname configured, using default hostname.
Jan 14 14:32:38.156511 systemd[1]: Hostname set to .
Jan 14 14:32:38.156527 systemd[1]: Initializing machine ID from random generator.
Jan 14 14:32:38.156541 systemd[1]: Queued start job for default target initrd.target.
Jan 14 14:32:38.156556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 14:32:38.156571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 14:32:38.156587 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 14:32:38.156602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 14:32:38.156620 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 14:32:38.156636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 14:32:38.156653 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 14:32:38.156669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 14:32:38.156685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 14:32:38.156700 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 14:32:38.156714 systemd[1]: Reached target paths.target - Path Units.
Jan 14 14:32:38.156731 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 14:32:38.156745 systemd[1]: Reached target swap.target - Swaps.
Jan 14 14:32:38.156759 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 14:32:38.156775 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 14:32:38.156790 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 14:32:38.156805 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 14:32:38.156821 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 14:32:38.156835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 14:32:38.156851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 14:32:38.156870 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 14:32:38.156886 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 14:32:38.156900 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 14:32:38.156915 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 14:32:38.156930 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 14:32:38.156945 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 14:32:38.156961 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 14:32:38.156974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 14:32:38.156992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:38.157032 systemd-journald[176]: Collecting audit messages is disabled.
Jan 14 14:32:38.157067 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 14:32:38.157083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 14:32:38.157103 systemd-journald[176]: Journal started
Jan 14 14:32:38.157152 systemd-journald[176]: Runtime Journal (/run/log/journal/0370aafac9e9438281d8f20030799184) is 8.0M, max 158.8M, 150.8M free.
Jan 14 14:32:38.165356 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 14:32:38.165588 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 14:32:38.175091 systemd-modules-load[177]: Inserted module 'overlay'
Jan 14 14:32:38.177623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 14:32:38.181518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 14:32:38.187225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:38.191767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 14:32:38.207092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 14:32:38.231830 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:32:38.240263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 14:32:38.247605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 14:32:38.258435 kernel: Bridge firewalling registered
Jan 14 14:32:38.259360 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 14 14:32:38.269269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 14:32:38.272489 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 14:32:38.285747 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:38.290831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 14:32:38.301552 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 14:32:38.309678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 14:32:38.321093 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 14:32:38.327530 dracut-cmdline[209]: dracut-dracut-053
Jan 14 14:32:38.333024 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:32:38.359155 systemd-resolved[219]: Positive Trust Anchors:
Jan 14 14:32:38.359172 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 14:32:38.359227 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 14:32:38.377288 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 14 14:32:38.378273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 14:32:38.389883 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 14:32:38.431364 kernel: SCSI subsystem initialized
Jan 14 14:32:38.442367 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 14:32:38.453368 kernel: iscsi: registered transport (tcp)
Jan 14 14:32:38.475225 kernel: iscsi: registered transport (qla4xxx)
Jan 14 14:32:38.475321 kernel: QLogic iSCSI HBA Driver
Jan 14 14:32:38.511408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 14:32:38.523476 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 14:32:38.559416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 14:32:38.559510 kernel: device-mapper: uevent: version 1.0.3
Jan 14 14:32:38.564613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 14:32:38.605365 kernel: raid6: avx512x4 gen() 18247 MB/s
Jan 14 14:32:38.625360 kernel: raid6: avx512x2 gen() 18404 MB/s
Jan 14 14:32:38.644351 kernel: raid6: avx512x1 gen() 18247 MB/s
Jan 14 14:32:38.663349 kernel: raid6: avx2x4 gen() 18325 MB/s
Jan 14 14:32:38.683359 kernel: raid6: avx2x2 gen() 18169 MB/s
Jan 14 14:32:38.703656 kernel: raid6: avx2x1 gen() 13649 MB/s
Jan 14 14:32:38.703705 kernel: raid6: using algorithm avx512x2 gen() 18404 MB/s
Jan 14 14:32:38.724352 kernel: raid6: .... xor() 29189 MB/s, rmw enabled
Jan 14 14:32:38.724384 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 14:32:38.747368 kernel: xor: automatically using best checksumming function avx
Jan 14 14:32:38.893365 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 14:32:38.903098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 14:32:38.916611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 14:32:38.930192 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jan 14 14:32:38.934675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 14:32:38.950512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 14:32:38.964026 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 14 14:32:38.994067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 14:32:39.007545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 14:32:39.048403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 14:32:39.063590 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 14:32:39.096462 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 14:32:39.103525 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 14:32:39.110753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 14:32:39.121964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 14:32:39.135633 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 14:32:39.158424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 14:32:39.179264 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 14:32:39.165144 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:39.169280 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:32:39.172399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:32:39.172589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:39.175971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:39.205000 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 14:32:39.205063 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 14:32:39.203615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:39.221598 kernel: AES CTR mode by8 optimization enabled
Jan 14 14:32:39.214450 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 14:32:39.238744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:32:39.238882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:39.256690 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:39.278437 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 14:32:39.278510 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 14:32:39.284359 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 14:32:39.290363 kernel: PTP clock support registered
Jan 14 14:32:39.298102 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 14:32:39.298165 kernel: hv_vmbus: registering driver hv_utils
Jan 14 14:32:39.301965 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 14:32:39.302016 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 14:32:39.302032 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 14:32:39.307217 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 14:32:40.241807 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 14:32:40.236938 systemd-resolved[219]: Clock change detected. Flushing caches.
Jan 14 14:32:40.243336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:40.255413 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 14:32:40.262995 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 14:32:40.263054 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 14:32:40.272078 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 14:32:40.282306 kernel: scsi host1: storvsc_host_t
Jan 14 14:32:40.282537 kernel: scsi host0: storvsc_host_t
Jan 14 14:32:40.282714 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 14:32:40.282753 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 14:32:40.273610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:32:40.292440 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 14:32:40.326088 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 14:32:40.331257 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 14:32:40.331287 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 14:32:40.341969 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 14:32:40.359307 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 14:32:40.359539 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 14:32:40.359720 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 14:32:40.359882 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 14:32:40.360052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:40.360077 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 14:32:40.341855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:40.480485 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 14:32:40.497438 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441)
Jan 14 14:32:40.520223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 14:32:40.522402 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (449)
Jan 14 14:32:40.544367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 14:32:40.548608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 14:32:40.565432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 14:32:40.582643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 14:32:40.600407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:40.607432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:41.339990 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: VF slot 1 added
Jan 14 14:32:41.348437 kernel: hv_vmbus: registering driver hv_pci
Jan 14 14:32:41.353406 kernel: hv_pci a53207a1-fe2e-4773-912a-67905ad201f4: PCI VMBus probing: Using version 0x10004
Jan 14 14:32:41.415165 kernel: hv_pci a53207a1-fe2e-4773-912a-67905ad201f4: PCI host bridge to bus fe2e:00
Jan 14 14:32:41.415367 kernel: pci_bus fe2e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 14:32:41.415598 kernel: pci_bus fe2e:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 14:32:41.415746 kernel: pci fe2e:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 14:32:41.415937 kernel: pci fe2e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 14:32:41.416112 kernel: pci fe2e:00:02.0: enabling Extended Tags
Jan 14 14:32:41.416266 kernel: pci fe2e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fe2e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 14:32:41.416463 kernel: pci_bus fe2e:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 14:32:41.416624 kernel: pci fe2e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 14:32:41.578785 kernel: mlx5_core fe2e:00:02.0: enabling device (0000 -> 0002)
Jan 14 14:32:41.822793 kernel: mlx5_core fe2e:00:02.0: firmware version: 14.30.5000
Jan 14 14:32:41.823300 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:41.823323 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: VF registering: eth1
Jan 14 14:32:41.823869 kernel: mlx5_core fe2e:00:02.0 eth1: joined to eth0
Jan 14 14:32:41.824066 kernel: mlx5_core fe2e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 14:32:41.824259 disk-uuid[590]: The operation has completed successfully.
Jan 14 14:32:41.832404 kernel: mlx5_core fe2e:00:02.0 enP65070s1: renamed from eth1
Jan 14 14:32:41.848328 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 14:32:41.848459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 14:32:41.864535 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 14:32:41.870560 sh[686]: Success
Jan 14 14:32:41.890882 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 14:32:41.958211 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 14:32:41.972818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 14:32:41.978241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 14:32:41.997406 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 14 14:32:41.997464 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:42.006560 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 14:32:42.010161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 14:32:42.013528 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 14:32:42.092696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 14:32:42.098800 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 14:32:42.114597 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 14:32:42.121621 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 14:32:42.134478 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:42.141142 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:42.141221 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:32:42.151423 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:32:42.161906 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 14:32:42.168378 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:42.176449 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 14:32:42.191467 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 14:32:42.230870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 14:32:42.248625 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 14:32:42.282956 systemd-networkd[870]: lo: Link UP
Jan 14 14:32:42.282967 systemd-networkd[870]: lo: Gained carrier
Jan 14 14:32:42.285103 systemd-networkd[870]: Enumeration completed
Jan 14 14:32:42.285370 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 14:32:42.288519 systemd[1]: Reached target network.target - Network.
Jan 14 14:32:42.290127 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:32:42.290131 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:32:42.365414 kernel: mlx5_core fe2e:00:02.0 enP65070s1: Link up
Jan 14 14:32:42.458312 ignition[803]: Ignition 2.19.0
Jan 14 14:32:42.458324 ignition[803]: Stage: fetch-offline
Jan 14 14:32:42.458368 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:42.458378 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:42.458497 ignition[803]: parsed url from cmdline: ""
Jan 14 14:32:42.458501 ignition[803]: no config URL provided
Jan 14 14:32:42.458507 ignition[803]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.458517 ignition[803]: no config at "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.458524 ignition[803]: failed to fetch config: resource requires networking
Jan 14 14:32:42.460973 ignition[803]: Ignition finished successfully
Jan 14 14:32:42.488255 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 14:32:42.499704 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 14:32:42.515125 ignition[878]: Ignition 2.19.0
Jan 14 14:32:42.515135 ignition[878]: Stage: fetch
Jan 14 14:32:42.515352 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:42.515365 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:42.515496 ignition[878]: parsed url from cmdline: ""
Jan 14 14:32:42.515500 ignition[878]: no config URL provided
Jan 14 14:32:42.515505 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.515511 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.515529 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 14:32:42.515677 ignition[878]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 14 14:32:42.716279 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2
Jan 14 14:32:42.721049 ignition[878]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 14 14:32:43.121760 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #3
Jan 14 14:32:43.121970 ignition[878]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 14 14:32:43.343543 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: Data path switched to VF: enP65070s1
Jan 14 14:32:43.343933 systemd-networkd[870]: enP65070s1: Link UP
Jan 14 14:32:43.344074 systemd-networkd[870]: eth0: Link UP
Jan 14 14:32:43.344239 systemd-networkd[870]: eth0: Gained carrier
Jan 14 14:32:43.344252 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:32:43.350829 systemd-networkd[870]: enP65070s1: Gained carrier
Jan 14 14:32:43.413462 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:32:43.922122 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #4
Jan 14 14:32:43.998751 ignition[878]: GET result: OK
Jan 14 14:32:43.998907 ignition[878]: config has been read from IMDS userdata
Jan 14 14:32:43.998946 ignition[878]: parsing config with SHA512: cd3801da6d96228d9a35648be866a02314700e320b828d3fe476dd39e48b77ce609e627999d038afcbe57be7ffc11bfb02b478faad8b75691403236a51941280
Jan 14 14:32:44.007040 unknown[878]: fetched base config from "system"
Jan 14 14:32:44.007052 unknown[878]: fetched base config from "system"
Jan 14 14:32:44.007061 unknown[878]: fetched user config from "azure"
Jan 14 14:32:44.014277 ignition[878]: fetch: fetch complete
Jan 14 14:32:44.014282 ignition[878]: fetch: fetch passed
Jan 14 14:32:44.016658 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 14:32:44.014336 ignition[878]: Ignition finished successfully
Jan 14 14:32:44.029648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 14:32:44.045238 ignition[886]: Ignition 2.19.0
Jan 14 14:32:44.045249 ignition[886]: Stage: kargs
Jan 14 14:32:44.047354 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 14:32:44.045489 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:44.045504 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:44.046337 ignition[886]: kargs: kargs passed
Jan 14 14:32:44.046381 ignition[886]: Ignition finished successfully
Jan 14 14:32:44.071158 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 14:32:44.087118 ignition[892]: Ignition 2.19.0
Jan 14 14:32:44.087129 ignition[892]: Stage: disks
Jan 14 14:32:44.089180 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 14:32:44.087347 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:44.087359 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:44.097733 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 14:32:44.088250 ignition[892]: disks: disks passed
Jan 14 14:32:44.106835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 14:32:44.088292 ignition[892]: Ignition finished successfully
Jan 14 14:32:44.114523 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 14:32:44.117698 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 14:32:44.134681 systemd[1]: Reached target basic.target - Basic System.
Jan 14 14:32:44.145569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 14:32:44.165685 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 14:32:44.169744 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 14:32:44.183583 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 14:32:44.279786 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 14 14:32:44.280406 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 14:32:44.285242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 14:32:44.299521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 14:32:44.304577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 14:32:44.315498 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Jan 14 14:32:44.316579 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 14:32:44.321474 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:44.331650 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:44.331695 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:32:44.330799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 14:32:44.330861 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 14:32:44.337402 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:32:44.348447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 14:32:44.348652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 14:32:44.353558 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 14:32:44.516824 coreos-metadata[914]: Jan 14 14:32:44.516 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 14:32:44.523341 coreos-metadata[914]: Jan 14 14:32:44.523 INFO Fetch successful
Jan 14 14:32:44.526174 coreos-metadata[914]: Jan 14 14:32:44.523 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 14:32:44.523607 systemd-networkd[870]: eth0: Gained IPv6LL
Jan 14 14:32:44.539826 coreos-metadata[914]: Jan 14 14:32:44.539 INFO Fetch successful
Jan 14 14:32:44.545149 coreos-metadata[914]: Jan 14 14:32:44.543 INFO wrote hostname ci-4081.3.0-a-a739250a79 to /sysroot/etc/hostname
Jan 14 14:32:44.548975 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 14:32:44.557876 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 14:32:44.568265 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jan 14 14:32:44.578881 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 14:32:44.584192 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 14:32:44.833663 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 14:32:44.843500 systemd-networkd[870]: enP65070s1: Gained IPv6LL
Jan 14 14:32:44.843742 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 14:32:44.853178 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 14:32:44.861600 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 14:32:44.869492 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:44.887255 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 14:32:44.899193 ignition[1036]: INFO : Ignition 2.19.0
Jan 14 14:32:44.899193 ignition[1036]: INFO : Stage: mount
Jan 14 14:32:44.905161 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:44.905161 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:44.905161 ignition[1036]: INFO : mount: mount passed
Jan 14 14:32:44.905161 ignition[1036]: INFO : Ignition finished successfully
Jan 14 14:32:44.911929 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 14:32:44.932554 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 14:32:44.946456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 14:32:44.970408 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1046)
Jan 14 14:32:44.975404 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:44.975446 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:44.981081 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:32:44.986405 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:32:44.987983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 14:32:45.015682 ignition[1063]: INFO : Ignition 2.19.0
Jan 14 14:32:45.019883 ignition[1063]: INFO : Stage: files
Jan 14 14:32:45.019883 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:45.019883 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:45.019883 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 14:32:45.032272 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 14:32:45.032272 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 14:32:45.064654 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 14:32:45.069226 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 14:32:45.073075 unknown[1063]: wrote ssh authorized keys file for user: core
Jan 14 14:32:45.076066 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 14:32:45.076066 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 14:32:45.076066 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 14 14:32:45.116102 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 14:32:45.289606 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 14:32:45.289606 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 14:32:45.289606 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 14 14:32:45.825295 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 14 14:32:45.969315 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 14:32:46.441600 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 14 14:32:46.724346 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.724346 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 14 14:32:46.758094 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: files passed
Jan 14 14:32:46.765817 ignition[1063]: INFO : Ignition finished successfully
Jan 14 14:32:46.760451 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 14:32:46.811700 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 14:32:46.816786 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 14:32:46.831953 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 14:32:46.832189 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 14:32:46.843992 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 14:32:46.843992 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 14:32:46.858985 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 14:32:46.851046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 14:32:46.859122 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 14:32:46.884656 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 14:32:46.910809 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 14:32:46.910926 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 14:32:46.923025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 14:32:46.926062 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 14:32:46.932012 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 14:32:46.943560 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 14:32:46.956918 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 14:32:46.969640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 14:32:46.987546 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 14:32:46.994921 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 14:32:46.998819 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 14:32:47.006949 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 14:32:47.007117 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 14:32:47.015160 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 14:32:47.025291 systemd[1]: Stopped target basic.target - Basic System. Jan 14 14:32:47.025547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 14:32:47.025965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 14:32:47.026435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 14:32:47.026934 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 14 14:32:47.027480 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 14:32:47.028110 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 14:32:47.028856 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 14:32:47.029547 systemd[1]: Stopped target swap.target - Swaps. Jan 14 14:32:47.030138 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 14:32:47.030287 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 14:32:47.031855 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 14:32:47.032604 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 14:32:47.033248 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 14:32:47.072790 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 14:32:47.081662 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 14:32:47.081838 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 14:32:47.136851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 14:32:47.137025 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 14:32:47.151361 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 14:32:47.151581 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 14:32:47.158932 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 14:32:47.159079 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 14:32:47.180634 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 14:32:47.190061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 14:32:47.190838 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 14 14:32:47.190956 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 14:32:47.203456 ignition[1115]: INFO : Ignition 2.19.0 Jan 14 14:32:47.203456 ignition[1115]: INFO : Stage: umount Jan 14 14:32:47.203456 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:32:47.203456 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:32:47.203456 ignition[1115]: INFO : umount: umount passed Jan 14 14:32:47.203456 ignition[1115]: INFO : Ignition finished successfully Jan 14 14:32:47.207061 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 14:32:47.207204 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 14:32:47.233597 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 14:32:47.233709 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 14:32:47.245104 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 14:32:47.245362 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 14:32:47.251359 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 14:32:47.251424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 14:32:47.257861 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 14:32:47.257926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 14:32:47.265280 systemd[1]: Stopped target network.target - Network. Jan 14 14:32:47.268906 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 14:32:47.268968 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 14:32:47.280999 systemd[1]: Stopped target paths.target - Path Units. Jan 14 14:32:47.286162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 14 14:32:47.291627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 14:32:47.316778 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 14:32:47.323275 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 14:32:47.323435 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 14:32:47.323485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 14:32:47.323982 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 14:32:47.324016 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 14:32:47.324518 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 14:32:47.324563 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 14:32:47.325033 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 14:32:47.325066 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 14:32:47.325662 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 14:32:47.326043 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 14:32:47.327560 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 14:32:47.328104 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 14:32:47.328191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 14:32:47.328756 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 14:32:47.328839 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 14:32:47.330891 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 14:32:47.330963 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 14:32:47.370472 systemd-networkd[870]: eth0: DHCPv6 lease lost Jan 14 14:32:47.372999 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 14 14:32:47.373106 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 14:32:47.385202 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 14:32:47.385340 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 14:32:47.419008 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 14:32:47.419090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 14:32:47.447590 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 14:32:47.468446 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 14:32:47.468540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 14:32:47.476283 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 14:32:47.476343 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 14:32:47.490737 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 14:32:47.490812 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 14:32:47.499680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 14:32:47.499746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 14:32:47.505916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 14:32:47.531088 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 14:32:47.531256 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 14:32:47.537522 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 14:32:47.537565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 14:32:47.544464 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 14 14:32:47.544512 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 14:32:47.562506 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 14:32:47.562578 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 14:32:47.571025 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 14:32:47.571090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 14:32:47.579220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 14:32:47.579283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 14:32:47.595543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 14:32:47.598565 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 14:32:47.598628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 14:32:47.603215 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 14:32:47.603265 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 14:32:47.613556 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 14:32:47.613619 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 14:32:47.625895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 14:32:47.625962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:32:47.643478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 14:32:47.643588 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 14:32:48.959416 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: Data path switched from VF: enP65070s1 Jan 14 14:32:48.979738 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 14 14:32:48.979877 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 14:32:48.984277 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 14:32:49.006606 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 14:32:49.017219 systemd[1]: Switching root. Jan 14 14:32:49.054520 systemd-journald[176]: Journal stopped Jan 14 14:32:38.148653 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148663 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148671 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 14 14:32:38.148681 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 14:32:38.148688 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148696 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148706 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148716 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148726 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148734 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148742 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 14:32:38.148752 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 14:32:38.148760 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 14 14:32:38.148767 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 14:32:38.148777 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14
14:32:38.148788 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 14:32:38.148795 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 14:32:38.148806 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 14:32:38.148813 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 14 14:32:38.148821 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 14:32:38.148837 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 14:32:38.148846 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 14:32:38.148855 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 14:32:38.148865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 14:32:38.148876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 14:32:38.148886 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 14:32:38.148894 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 14:32:38.148903 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 14:32:38.148913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 14:32:38.148921 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 14:32:38.148930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 14:32:38.148940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 14:32:38.148948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 14:32:38.148961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 14:32:38.148968 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 14:32:38.148976 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 14:32:38.148986 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 14:32:38.148994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 14:32:38.149001 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 14:32:38.149012 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 14:32:38.149019 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 14:32:38.149027 kernel: Zone ranges: Jan 14 14:32:38.149036 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 14:32:38.149044 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 14:32:38.149051 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 14:32:38.149059 kernel: Movable zone start for each node Jan 14 14:32:38.149066 kernel: Early memory node ranges Jan 14 14:32:38.149073 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 14:32:38.149081 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 14:32:38.149088 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 14:32:38.149096 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 14:32:38.149105 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 14:32:38.149113 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 14:32:38.149120 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 14:32:38.149127 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jan 14 14:32:38.149135 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 14:32:38.149142 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 14:32:38.149150 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 14:32:38.149157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 14:32:38.149165 kernel: ACPI: Using ACPI (MADT) for SMP configuration 
information Jan 14 14:32:38.149175 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 14:32:38.149185 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 14:32:38.149192 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 14:32:38.149200 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 14:32:38.149211 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 14:32:38.149218 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 14:32:38.149226 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 14:32:38.149236 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 14:32:38.149243 kernel: pcpu-alloc: [0] 0 1 Jan 14 14:32:38.149254 kernel: Hyper-V: PV spinlocks enabled Jan 14 14:32:38.149264 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 14:32:38.149272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 14 14:32:38.149282 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 14:32:38.149290 kernel: random: crng init done Jan 14 14:32:38.149298 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 14:32:38.149307 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 14:32:38.149315 kernel: Fallback order for Node 0: 0 Jan 14 14:32:38.149325 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 2062618 Jan 14 14:32:38.149350 kernel: Policy zone: Normal Jan 14 14:32:38.149363 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 14:32:38.149371 kernel: software IO TLB: area num 2. Jan 14 14:32:38.149381 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 310128K reserved, 0K cma-reserved) Jan 14 14:32:38.149390 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 14:32:38.149398 kernel: ftrace: allocating 37918 entries in 149 pages Jan 14 14:32:38.149406 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 14:32:38.149414 kernel: Dynamic Preempt: voluntary Jan 14 14:32:38.149425 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 14:32:38.149434 kernel: rcu: RCU event tracing is enabled. Jan 14 14:32:38.149444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 14:32:38.149452 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 14:32:38.149460 kernel: Rude variant of Tasks RCU enabled. Jan 14 14:32:38.149468 kernel: Tracing variant of Tasks RCU enabled. Jan 14 14:32:38.149477 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 14:32:38.149495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 14:32:38.149503 kernel: Using NULL legacy PIC Jan 14 14:32:38.149511 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 14:32:38.149519 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 14:32:38.149527 kernel: Console: colour dummy device 80x25 Jan 14 14:32:38.149535 kernel: printk: console [tty1] enabled Jan 14 14:32:38.149543 kernel: printk: console [ttyS0] enabled Jan 14 14:32:38.149550 kernel: printk: bootconsole [earlyser0] disabled Jan 14 14:32:38.149558 kernel: ACPI: Core revision 20230628 Jan 14 14:32:38.149570 kernel: Failed to register legacy timer interrupt Jan 14 14:32:38.149581 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 14:32:38.149594 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 14:32:38.149605 kernel: Hyper-V: Using IPI hypercalls Jan 14 14:32:38.149613 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 14:32:38.149621 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 14:32:38.149629 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 14:32:38.149637 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 14:32:38.149646 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 14:32:38.149655 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 14:32:38.149668 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jan 14 14:32:38.149676 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 14:32:38.149688 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 14:32:38.149696 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 14:32:38.149707 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 14:32:38.149715 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 14:32:38.149722 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 14:32:38.149730 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 14:32:38.149738 kernel: RETBleed: Vulnerable Jan 14 14:32:38.149750 kernel: Speculative Store Bypass: Vulnerable Jan 14 14:32:38.149758 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 14:32:38.149765 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 14:32:38.149773 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 14:32:38.149785 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 14:32:38.149792 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 14:32:38.149801 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 14:32:38.149811 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 14:32:38.149819 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 14:32:38.149827 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 14:32:38.149837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 14:32:38.149848 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 14:32:38.149855 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 14:32:38.149863 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 14:32:38.149871 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 14:32:38.149879 kernel: Freeing SMP alternatives memory: 32K Jan 14 14:32:38.149890 kernel: pid_max: default: 32768 minimum: 301 Jan 14 14:32:38.149898 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 14:32:38.149906 kernel: landlock: Up and running. Jan 14 14:32:38.149914 kernel: SELinux: Initializing. 
Jan 14 14:32:38.149925 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 14:32:38.149933 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 14:32:38.149944 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 14:32:38.149954 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 14:32:38.149962 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 14:32:38.149970 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 14:32:38.149978 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 14:32:38.149986 kernel: signal: max sigframe size: 3632 Jan 14 14:32:38.149995 kernel: rcu: Hierarchical SRCU implementation. Jan 14 14:32:38.150003 kernel: rcu: Max phase no-delay instances is 400. Jan 14 14:32:38.150011 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 14:32:38.150021 kernel: smp: Bringing up secondary CPUs ... Jan 14 14:32:38.150032 kernel: smpboot: x86: Booting SMP configuration: Jan 14 14:32:38.150043 kernel: .... node #0, CPUs: #1 Jan 14 14:32:38.150052 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 14:32:38.150060 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 14:32:38.150071 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 14:32:38.150079 kernel: smpboot: Max logical packages: 1 Jan 14 14:32:38.150087 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 14:32:38.150097 kernel: devtmpfs: initialized Jan 14 14:32:38.150108 kernel: x86/mm: Memory block size: 128MB Jan 14 14:32:38.150116 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 14:32:38.150127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 14:32:38.150135 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 14:32:38.150144 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 14:32:38.150155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 14:32:38.150163 kernel: audit: initializing netlink subsys (disabled) Jan 14 14:32:38.150179 kernel: audit: type=2000 audit(1736865156.028:1): state=initialized audit_enabled=0 res=1 Jan 14 14:32:38.150187 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 14:32:38.150201 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 14:32:38.150209 kernel: cpuidle: using governor menu Jan 14 14:32:38.150220 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 14:32:38.150228 kernel: dca service started, version 1.12.1 Jan 14 14:32:38.150239 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 14:32:38.150247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 14:32:38.150257 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 14:32:38.150268 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 14:32:38.150276 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 14:32:38.150287 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 14:32:38.150298 kernel: ACPI: Added _OSI(Module Device)
Jan 14 14:32:38.150306 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 14:32:38.150315 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 14:32:38.150324 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 14:32:38.150332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 14:32:38.150353 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 14:32:38.150362 kernel: ACPI: Interpreter enabled
Jan 14 14:32:38.150372 kernel: ACPI: PM: (supports S0 S5)
Jan 14 14:32:38.150383 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 14:32:38.150394 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 14:32:38.150402 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 14:32:38.150410 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 14:32:38.150421 kernel: iommu: Default domain type: Translated
Jan 14 14:32:38.150429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 14:32:38.150437 kernel: efivars: Registered efivars operations
Jan 14 14:32:38.150445 kernel: PCI: Using ACPI for IRQ routing
Jan 14 14:32:38.150454 kernel: PCI: System does not support PCI
Jan 14 14:32:38.150465 kernel: vgaarb: loaded
Jan 14 14:32:38.150474 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 14:32:38.150483 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 14:32:38.150493 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 14:32:38.150501 kernel: pnp: PnP ACPI init
Jan 14 14:32:38.150512 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 14:32:38.150520 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 14:32:38.150528 kernel: NET: Registered PF_INET protocol family
Jan 14 14:32:38.150539 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 14:32:38.150549 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 14:32:38.150559 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 14:32:38.150568 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 14:32:38.150576 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 14:32:38.150586 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 14:32:38.155141 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 14:32:38.155156 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 14:32:38.155169 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 14:32:38.155183 kernel: NET: Registered PF_XDP protocol family
Jan 14 14:32:38.155204 kernel: PCI: CLS 0 bytes, default 64
Jan 14 14:32:38.155217 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 14:32:38.155231 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 14:32:38.155245 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 14:32:38.155259 kernel: Initialise system trusted keyrings
Jan 14 14:32:38.155272 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 14:32:38.155286 kernel: Key type asymmetric registered
Jan 14 14:32:38.155300 kernel: Asymmetric key parser 'x509' registered
Jan 14 14:32:38.155314 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 14:32:38.155332 kernel: io scheduler mq-deadline registered
Jan 14 14:32:38.155378 kernel: io scheduler kyber registered
Jan 14 14:32:38.155392 kernel: io scheduler bfq registered
Jan 14 14:32:38.155407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 14:32:38.155421 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 14:32:38.155436 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 14:32:38.155451 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 14:32:38.155465 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 14:32:38.155647 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 14:32:38.155773 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T14:32:37 UTC (1736865157)
Jan 14 14:32:38.155881 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 14:32:38.155897 kernel: intel_pstate: CPU model not supported
Jan 14 14:32:38.155911 kernel: efifb: probing for efifb
Jan 14 14:32:38.155924 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 14:32:38.155938 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 14:32:38.155951 kernel: efifb: scrolling: redraw
Jan 14 14:32:38.155969 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 14:32:38.155982 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 14:32:38.155995 kernel: fb0: EFI VGA frame buffer device
Jan 14 14:32:38.156009 kernel: pstore: Using crash dump compression: deflate
Jan 14 14:32:38.156023 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 14:32:38.156036 kernel: NET: Registered PF_INET6 protocol family
Jan 14 14:32:38.156050 kernel: Segment Routing with IPv6
Jan 14 14:32:38.156064 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 14:32:38.156077 kernel: NET: Registered PF_PACKET protocol family
Jan 14 14:32:38.156091 kernel: Key type dns_resolver registered
Jan 14 14:32:38.156108 kernel: IPI shorthand broadcast: enabled
Jan 14 14:32:38.156121 kernel: sched_clock: Marking stable (924002800, 49665600)->(1224486500, -250818100)
Jan 14 14:32:38.156135 kernel: registered taskstats version 1
Jan 14 14:32:38.156148 kernel: Loading compiled-in X.509 certificates
Jan 14 14:32:38.156162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 14 14:32:38.156176 kernel: Key type .fscrypt registered
Jan 14 14:32:38.156189 kernel: Key type fscrypt-provisioning registered
Jan 14 14:32:38.156203 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 14:32:38.156220 kernel: ima: Allocated hash algorithm: sha1
Jan 14 14:32:38.156234 kernel: ima: No architecture policies found
Jan 14 14:32:38.156248 kernel: clk: Disabling unused clocks
Jan 14 14:32:38.156262 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 14 14:32:38.156277 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 14:32:38.156291 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 14 14:32:38.156305 kernel: Run /init as init process
Jan 14 14:32:38.156320 kernel: with arguments:
Jan 14 14:32:38.156333 kernel: /init
Jan 14 14:32:38.156375 kernel: with environment:
Jan 14 14:32:38.156389 kernel: HOME=/
Jan 14 14:32:38.156403 kernel: TERM=linux
Jan 14 14:32:38.156416 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 14:32:38.156433 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 14:32:38.156450 systemd[1]: Detected virtualization microsoft.
Jan 14 14:32:38.156465 systemd[1]: Detected architecture x86-64.
Jan 14 14:32:38.156479 systemd[1]: Running in initrd.
Jan 14 14:32:38.156497 systemd[1]: No hostname configured, using default hostname.
Jan 14 14:32:38.156511 systemd[1]: Hostname set to .
Jan 14 14:32:38.156527 systemd[1]: Initializing machine ID from random generator.
Jan 14 14:32:38.156541 systemd[1]: Queued start job for default target initrd.target.
Jan 14 14:32:38.156556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 14:32:38.156571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 14:32:38.156587 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 14:32:38.156602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 14:32:38.156620 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 14:32:38.156636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 14:32:38.156653 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 14:32:38.156669 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 14:32:38.156685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 14:32:38.156700 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 14:32:38.156714 systemd[1]: Reached target paths.target - Path Units.
Jan 14 14:32:38.156731 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 14:32:38.156745 systemd[1]: Reached target swap.target - Swaps.
Jan 14 14:32:38.156759 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 14:32:38.156775 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 14:32:38.156790 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 14:32:38.156805 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 14:32:38.156821 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 14:32:38.156835 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 14:32:38.156851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 14:32:38.156870 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 14:32:38.156886 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 14:32:38.156900 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 14:32:38.156915 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 14:32:38.156930 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 14:32:38.156945 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 14:32:38.156961 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 14:32:38.156974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 14:32:38.156992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:38.157032 systemd-journald[176]: Collecting audit messages is disabled.
Jan 14 14:32:38.157067 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 14:32:38.157083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 14:32:38.157103 systemd-journald[176]: Journal started
Jan 14 14:32:38.157152 systemd-journald[176]: Runtime Journal (/run/log/journal/0370aafac9e9438281d8f20030799184) is 8.0M, max 158.8M, 150.8M free.
Jan 14 14:32:38.165356 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 14:32:38.165588 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 14:32:38.175091 systemd-modules-load[177]: Inserted module 'overlay'
Jan 14 14:32:38.177623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 14:32:38.181518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 14:32:38.187225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:38.191767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 14:32:38.207092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 14:32:38.231830 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:32:38.240263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 14:32:38.247605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 14:32:38.258435 kernel: Bridge firewalling registered
Jan 14 14:32:38.259360 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 14 14:32:38.269269 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 14:32:38.272489 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 14:32:38.285747 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:38.290831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 14:32:38.301552 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 14:32:38.309678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 14:32:38.321093 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 14:32:38.327530 dracut-cmdline[209]: dracut-dracut-053
Jan 14 14:32:38.333024 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:32:38.359155 systemd-resolved[219]: Positive Trust Anchors:
Jan 14 14:32:38.359172 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 14:32:38.359227 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 14:32:38.377288 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jan 14 14:32:38.378273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 14:32:38.389883 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 14:32:38.431364 kernel: SCSI subsystem initialized
Jan 14 14:32:38.442367 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 14:32:38.453368 kernel: iscsi: registered transport (tcp)
Jan 14 14:32:38.475225 kernel: iscsi: registered transport (qla4xxx)
Jan 14 14:32:38.475321 kernel: QLogic iSCSI HBA Driver
Jan 14 14:32:38.511408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 14:32:38.523476 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 14:32:38.559416 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 14:32:38.559510 kernel: device-mapper: uevent: version 1.0.3
Jan 14 14:32:38.564613 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 14:32:38.605365 kernel: raid6: avx512x4 gen() 18247 MB/s
Jan 14 14:32:38.625360 kernel: raid6: avx512x2 gen() 18404 MB/s
Jan 14 14:32:38.644351 kernel: raid6: avx512x1 gen() 18247 MB/s
Jan 14 14:32:38.663349 kernel: raid6: avx2x4 gen() 18325 MB/s
Jan 14 14:32:38.683359 kernel: raid6: avx2x2 gen() 18169 MB/s
Jan 14 14:32:38.703656 kernel: raid6: avx2x1 gen() 13649 MB/s
Jan 14 14:32:38.703705 kernel: raid6: using algorithm avx512x2 gen() 18404 MB/s
Jan 14 14:32:38.724352 kernel: raid6: .... xor() 29189 MB/s, rmw enabled
Jan 14 14:32:38.724384 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 14:32:38.747368 kernel: xor: automatically using best checksumming function avx
Jan 14 14:32:38.893365 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 14:32:38.903098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 14:32:38.916611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 14:32:38.930192 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jan 14 14:32:38.934675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 14:32:38.950512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 14:32:38.964026 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 14 14:32:38.994067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 14:32:39.007545 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 14:32:39.048403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 14:32:39.063590 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 14:32:39.096462 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 14:32:39.103525 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 14:32:39.110753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 14:32:39.121964 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 14:32:39.135633 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 14:32:39.158424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 14:32:39.179264 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 14:32:39.165144 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:39.169280 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:32:39.172399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:32:39.172589 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:39.175971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:39.205000 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 14:32:39.205063 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 14:32:39.203615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:39.221598 kernel: AES CTR mode by8 optimization enabled
Jan 14 14:32:39.214450 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 14:32:39.238744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:32:39.238882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:39.256690 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:32:39.278437 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 14:32:39.278510 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 14:32:39.284359 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 14:32:39.290363 kernel: PTP clock support registered
Jan 14 14:32:39.298102 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 14:32:39.298165 kernel: hv_vmbus: registering driver hv_utils
Jan 14 14:32:39.301965 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 14:32:39.302016 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 14:32:39.302032 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 14:32:39.307217 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 14:32:40.241807 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 14:32:40.236938 systemd-resolved[219]: Clock change detected. Flushing caches.
Jan 14 14:32:40.243336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:40.255413 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 14:32:40.262995 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 14:32:40.263054 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 14:32:40.272078 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 14:32:40.282306 kernel: scsi host1: storvsc_host_t
Jan 14 14:32:40.282537 kernel: scsi host0: storvsc_host_t
Jan 14 14:32:40.282714 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 14:32:40.282753 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 14:32:40.273610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:32:40.292440 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 14:32:40.326088 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 14:32:40.331257 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 14:32:40.331287 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 14:32:40.341969 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 14:32:40.359307 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 14:32:40.359539 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 14:32:40.359720 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 14:32:40.359882 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 14:32:40.360052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:40.360077 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 14:32:40.341855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:40.480485 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 14:32:40.497438 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (441)
Jan 14 14:32:40.520223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 14:32:40.522402 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (449)
Jan 14 14:32:40.544367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 14:32:40.548608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 14:32:40.565432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 14:32:40.582643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 14:32:40.600407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:40.607432 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:41.339990 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: VF slot 1 added
Jan 14 14:32:41.348437 kernel: hv_vmbus: registering driver hv_pci
Jan 14 14:32:41.353406 kernel: hv_pci a53207a1-fe2e-4773-912a-67905ad201f4: PCI VMBus probing: Using version 0x10004
Jan 14 14:32:41.415165 kernel: hv_pci a53207a1-fe2e-4773-912a-67905ad201f4: PCI host bridge to bus fe2e:00
Jan 14 14:32:41.415367 kernel: pci_bus fe2e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 14:32:41.415598 kernel: pci_bus fe2e:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 14:32:41.415746 kernel: pci fe2e:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 14:32:41.415937 kernel: pci fe2e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 14:32:41.416112 kernel: pci fe2e:00:02.0: enabling Extended Tags
Jan 14 14:32:41.416266 kernel: pci fe2e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fe2e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 14:32:41.416463 kernel: pci_bus fe2e:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 14:32:41.416624 kernel: pci fe2e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 14:32:41.578785 kernel: mlx5_core fe2e:00:02.0: enabling device (0000 -> 0002)
Jan 14 14:32:41.822793 kernel: mlx5_core fe2e:00:02.0: firmware version: 14.30.5000
Jan 14 14:32:41.823300 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:32:41.823323 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: VF registering: eth1
Jan 14 14:32:41.823869 kernel: mlx5_core fe2e:00:02.0 eth1: joined to eth0
Jan 14 14:32:41.824066 kernel: mlx5_core fe2e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 14:32:41.824259 disk-uuid[590]: The operation has completed successfully.
Jan 14 14:32:41.832404 kernel: mlx5_core fe2e:00:02.0 enP65070s1: renamed from eth1
Jan 14 14:32:41.848328 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 14:32:41.848459 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 14:32:41.864535 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 14:32:41.870560 sh[686]: Success
Jan 14 14:32:41.890882 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 14:32:41.958211 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 14:32:41.972818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 14:32:41.978241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 14:32:41.997406 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 14 14:32:41.997464 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:42.006560 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 14:32:42.010161 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 14:32:42.013528 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 14:32:42.092696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 14:32:42.098800 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 14:32:42.114597 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 14:32:42.121621 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 14:32:42.134478 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:42.141142 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:42.141221 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:32:42.151423 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:32:42.161906 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 14:32:42.168378 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:42.176449 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 14:32:42.191467 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 14:32:42.230870 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 14:32:42.248625 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 14:32:42.282956 systemd-networkd[870]: lo: Link UP
Jan 14 14:32:42.282967 systemd-networkd[870]: lo: Gained carrier
Jan 14 14:32:42.285103 systemd-networkd[870]: Enumeration completed
Jan 14 14:32:42.285370 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 14:32:42.288519 systemd[1]: Reached target network.target - Network.
Jan 14 14:32:42.290127 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:32:42.290131 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:32:42.365414 kernel: mlx5_core fe2e:00:02.0 enP65070s1: Link up
Jan 14 14:32:42.458312 ignition[803]: Ignition 2.19.0
Jan 14 14:32:42.458324 ignition[803]: Stage: fetch-offline
Jan 14 14:32:42.458368 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:42.458378 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:42.458497 ignition[803]: parsed url from cmdline: ""
Jan 14 14:32:42.458501 ignition[803]: no config URL provided
Jan 14 14:32:42.458507 ignition[803]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.458517 ignition[803]: no config at "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.458524 ignition[803]: failed to fetch config: resource requires networking
Jan 14 14:32:42.460973 ignition[803]: Ignition finished successfully
Jan 14 14:32:42.488255 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 14:32:42.499704 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 14:32:42.515125 ignition[878]: Ignition 2.19.0
Jan 14 14:32:42.515135 ignition[878]: Stage: fetch
Jan 14 14:32:42.515352 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:42.515365 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:42.515496 ignition[878]: parsed url from cmdline: ""
Jan 14 14:32:42.515500 ignition[878]: no config URL provided
Jan 14 14:32:42.515505 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.515511 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 14 14:32:42.515529 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 14:32:42.515677 ignition[878]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 14 14:32:42.716279 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #2
Jan 14 14:32:42.721049 ignition[878]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 14 14:32:43.121760 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #3
Jan 14 14:32:43.121970 ignition[878]: GET error: Get "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 14 14:32:43.343543 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: Data path switched to VF: enP65070s1
Jan 14 14:32:43.343933 systemd-networkd[870]: enP65070s1: Link UP
Jan 14 14:32:43.344074 systemd-networkd[870]: eth0: Link UP
Jan 14 14:32:43.344239 systemd-networkd[870]: eth0: Gained carrier
Jan 14 14:32:43.344252 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:32:43.350829 systemd-networkd[870]: enP65070s1: Gained carrier
Jan 14 14:32:43.413462 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:32:43.922122 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #4
Jan 14 14:32:43.998751 ignition[878]: GET result: OK
Jan 14 14:32:43.998907 ignition[878]: config has been read from IMDS userdata
Jan 14 14:32:43.998946 ignition[878]: parsing config with SHA512: cd3801da6d96228d9a35648be866a02314700e320b828d3fe476dd39e48b77ce609e627999d038afcbe57be7ffc11bfb02b478faad8b75691403236a51941280
Jan 14 14:32:44.007040 unknown[878]: fetched base config from "system"
Jan 14 14:32:44.007052 unknown[878]: fetched base config from "system"
Jan 14 14:32:44.007061 unknown[878]: fetched user config from "azure"
Jan 14 14:32:44.014277 ignition[878]: fetch: fetch complete
Jan 14 14:32:44.014282 ignition[878]: fetch: fetch passed
Jan 14 14:32:44.016658 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 14:32:44.014336 ignition[878]: Ignition finished successfully
Jan 14 14:32:44.029648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 14:32:44.045238 ignition[886]: Ignition 2.19.0
Jan 14 14:32:44.045249 ignition[886]: Stage: kargs
Jan 14 14:32:44.047354 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 14:32:44.045489 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:44.045504 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:44.046337 ignition[886]: kargs: kargs passed
Jan 14 14:32:44.046381 ignition[886]: Ignition finished successfully
Jan 14 14:32:44.071158 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 14:32:44.087118 ignition[892]: Ignition 2.19.0
Jan 14 14:32:44.087129 ignition[892]: Stage: disks
Jan 14 14:32:44.089180 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 14:32:44.087347 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:44.087359 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:44.097733 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 14:32:44.088250 ignition[892]: disks: disks passed
Jan 14 14:32:44.106835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 14:32:44.088292 ignition[892]: Ignition finished successfully
Jan 14 14:32:44.114523 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 14:32:44.117698 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 14:32:44.134681 systemd[1]: Reached target basic.target - Basic System.
Jan 14 14:32:44.145569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 14:32:44.165685 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 14:32:44.169744 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 14:32:44.183583 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 14:32:44.279786 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 14 14:32:44.280406 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 14:32:44.285242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 14:32:44.299521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 14:32:44.304577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 14:32:44.315498 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Jan 14 14:32:44.316579 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 14:32:44.321474 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:44.331650 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:44.331695 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:32:44.330799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 14:32:44.330861 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 14:32:44.337402 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:32:44.348447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 14:32:44.348652 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 14:32:44.353558 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 14:32:44.516824 coreos-metadata[914]: Jan 14 14:32:44.516 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 14:32:44.523341 coreos-metadata[914]: Jan 14 14:32:44.523 INFO Fetch successful
Jan 14 14:32:44.526174 coreos-metadata[914]: Jan 14 14:32:44.523 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 14:32:44.523607 systemd-networkd[870]: eth0: Gained IPv6LL
Jan 14 14:32:44.539826 coreos-metadata[914]: Jan 14 14:32:44.539 INFO Fetch successful
Jan 14 14:32:44.545149 coreos-metadata[914]: Jan 14 14:32:44.543 INFO wrote hostname ci-4081.3.0-a-a739250a79 to /sysroot/etc/hostname
Jan 14 14:32:44.548975 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 14:32:44.557876 initrd-setup-root[941]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 14:32:44.568265 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Jan 14 14:32:44.578881 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 14:32:44.584192 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 14:32:44.833663 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 14:32:44.843500 systemd-networkd[870]: enP65070s1: Gained IPv6LL
Jan 14 14:32:44.843742 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 14:32:44.853178 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 14:32:44.861600 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 14:32:44.869492 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:44.887255 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 14:32:44.899193 ignition[1036]: INFO : Ignition 2.19.0
Jan 14 14:32:44.899193 ignition[1036]: INFO : Stage: mount
Jan 14 14:32:44.905161 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:44.905161 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:44.905161 ignition[1036]: INFO : mount: mount passed
Jan 14 14:32:44.905161 ignition[1036]: INFO : Ignition finished successfully
Jan 14 14:32:44.911929 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 14:32:44.932554 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 14:32:44.946456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 14:32:44.970408 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1046)
Jan 14 14:32:44.975404 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:32:44.975446 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:32:44.981081 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:32:44.986405 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:32:44.987983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 14:32:45.015682 ignition[1063]: INFO : Ignition 2.19.0
Jan 14 14:32:45.019883 ignition[1063]: INFO : Stage: files
Jan 14 14:32:45.019883 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:45.019883 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:45.019883 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 14:32:45.032272 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 14:32:45.032272 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 14:32:45.064654 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 14:32:45.069226 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 14:32:45.073075 unknown[1063]: wrote ssh authorized keys file for user: core
Jan 14 14:32:45.076066 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 14:32:45.076066 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 14:32:45.076066 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 14 14:32:45.116102 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 14:32:45.289606 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 14 14:32:45.289606 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 14:32:45.289606 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 14 14:32:45.825295 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 14 14:32:45.969315 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 14:32:45.976609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.020834 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 14 14:32:46.441600 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 14 14:32:46.724346 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 14 14:32:46.724346 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 14 14:32:46.758094 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 14:32:46.765817 ignition[1063]: INFO : files: files passed
Jan 14 14:32:46.765817 ignition[1063]: INFO : Ignition finished successfully
Jan 14 14:32:46.760451 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 14:32:46.811700 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 14:32:46.816786 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 14:32:46.831953 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 14:32:46.832189 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 14:32:46.843992 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 14:32:46.843992 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 14:32:46.858985 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 14:32:46.851046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 14:32:46.859122 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 14:32:46.884656 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 14:32:46.910809 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 14:32:46.910926 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 14:32:46.923025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 14:32:46.926062 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 14:32:46.932012 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 14:32:46.943560 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 14:32:46.956918 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 14:32:46.969640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 14:32:46.987546 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 14:32:46.994921 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 14:32:46.998819 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 14:32:47.006949 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 14:32:47.007117 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 14:32:47.015160 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 14:32:47.025291 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 14:32:47.025547 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 14:32:47.025965 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 14:32:47.026435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 14:32:47.026934 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 14:32:47.027480 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 14:32:47.028110 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 14 14:32:47.028856 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 14 14:32:47.029547 systemd[1]: Stopped target swap.target - Swaps.
Jan 14 14:32:47.030138 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 14 14:32:47.030287 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 14:32:47.031855 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 14 14:32:47.032604 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 14:32:47.033248 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 14 14:32:47.072790 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 14:32:47.081662 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 14 14:32:47.081838 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 14 14:32:47.136851 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 14 14:32:47.137025 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 14:32:47.151361 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 14 14:32:47.151581 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 14 14:32:47.158932 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 14 14:32:47.159079 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 14:32:47.180634 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 14 14:32:47.190061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 14 14:32:47.190838 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 14 14:32:47.190956 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 14:32:47.203456 ignition[1115]: INFO : Ignition 2.19.0
Jan 14 14:32:47.203456 ignition[1115]: INFO : Stage: umount
Jan 14 14:32:47.203456 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 14:32:47.203456 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:32:47.203456 ignition[1115]: INFO : umount: umount passed
Jan 14 14:32:47.203456 ignition[1115]: INFO : Ignition finished successfully
Jan 14 14:32:47.207061 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 14 14:32:47.207204 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 14:32:47.233597 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 14 14:32:47.233709 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 14 14:32:47.245104 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 14 14:32:47.245362 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 14 14:32:47.251359 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 14 14:32:47.251424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 14 14:32:47.257861 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 14 14:32:47.257926 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 14 14:32:47.265280 systemd[1]: Stopped target network.target - Network.
Jan 14 14:32:47.268906 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 14 14:32:47.268968 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 14:32:47.280999 systemd[1]: Stopped target paths.target - Path Units.
Jan 14 14:32:47.286162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 14 14:32:47.291627 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 14:32:47.316778 systemd[1]: Stopped target slices.target - Slice Units.
Jan 14 14:32:47.323275 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 14 14:32:47.323435 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 14 14:32:47.323485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 14:32:47.323982 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 14 14:32:47.324016 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 14:32:47.324518 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 14 14:32:47.324563 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 14 14:32:47.325033 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 14 14:32:47.325066 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 14 14:32:47.325662 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 14 14:32:47.326043 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 14 14:32:47.327560 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 14 14:32:47.328104 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 14 14:32:47.328191 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 14 14:32:47.328756 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 14:32:47.328839 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 14:32:47.330891 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 14:32:47.330963 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 14:32:47.370472 systemd-networkd[870]: eth0: DHCPv6 lease lost
Jan 14 14:32:47.372999 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 14:32:47.373106 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 14:32:47.385202 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 14:32:47.385340 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 14:32:47.419008 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 14:32:47.419090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 14:32:47.447590 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 14:32:47.468446 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 14:32:47.468540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 14:32:47.476283 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 14:32:47.476343 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 14:32:47.490737 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 14:32:47.490812 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 14:32:47.499680 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 14 14:32:47.499746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 14:32:47.505916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 14:32:47.531088 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 14:32:47.531256 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 14:32:47.537522 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 14:32:47.537565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 14:32:47.544464 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 14:32:47.544512 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 14:32:47.562506 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 14:32:47.562578 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 14:32:47.571025 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 14:32:47.571090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 14:32:47.579220 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 14:32:47.579283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:32:47.595543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 14:32:47.598565 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 14:32:47.598628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 14:32:47.603215 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 14:32:47.603265 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 14:32:47.613556 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 14:32:47.613619 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 14:32:47.625895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:32:47.625962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:32:47.643478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 14:32:47.643588 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 14:32:48.959416 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: Data path switched from VF: enP65070s1
Jan 14 14:32:48.979738 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 14:32:48.979877 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 14:32:48.984277 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 14:32:49.006606 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 14:32:49.017219 systemd[1]: Switching root.
Jan 14 14:32:49.054520 systemd-journald[176]: Journal stopped
Jan 14 14:32:57.008587 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Jan 14 14:32:57.008628 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 14:32:57.008645 kernel: SELinux: policy capability open_perms=1
Jan 14 14:32:57.008659 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 14:32:57.008670 kernel: SELinux: policy capability always_check_network=0
Jan 14 14:32:57.008681 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 14:32:57.008695 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 14:32:57.008710 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 14:32:57.008722 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 14:32:57.008735 kernel: audit: type=1403 audit(1736865174.955:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 14 14:32:57.008748 systemd[1]: Successfully loaded SELinux policy in 63.691ms.
Jan 14 14:32:57.008758 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.323ms.
Jan 14 14:32:57.011788 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 14:32:57.011823 systemd[1]: Detected virtualization microsoft.
Jan 14 14:32:57.011845 systemd[1]: Detected architecture x86-64.
Jan 14 14:32:57.011860 systemd[1]: Detected first boot.
Jan 14 14:32:57.011876 systemd[1]: Hostname set to .
Jan 14 14:32:57.011890 systemd[1]: Initializing machine ID from random generator.
Jan 14 14:32:57.011906 zram_generator::config[1158]: No configuration found.
Jan 14 14:32:57.011924 systemd[1]: Populated /etc with preset unit settings.
Jan 14 14:32:57.011939 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 14:32:57.011954 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 14:32:57.011970 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 14:32:57.011986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 14:32:57.012002 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 14:32:57.012019 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 14:32:57.012037 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 14:32:57.012053 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 14:32:57.012068 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 14:32:57.012086 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 14:32:57.012102 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 14:32:57.012119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 14:32:57.012136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 14:32:57.012152 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 14:32:57.012172 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 14:32:57.012188 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 14:32:57.012205 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 14:32:57.012221 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 14 14:32:57.012238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 14:32:57.012255 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 14:32:57.012277 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 14:32:57.012296 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 14:32:57.012318 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 14:32:57.012338 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 14:32:57.012356 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 14:32:57.012373 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 14:32:57.014437 systemd[1]: Reached target swap.target - Swaps.
Jan 14 14:32:57.014471 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 14:32:57.014490 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 14:32:57.014515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 14:32:57.014534 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 14:32:57.014553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 14:32:57.014573 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 14:32:57.014590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 14:32:57.014612 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 14:32:57.014630 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 14:32:57.014648 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:32:57.014664 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 14:32:57.014679 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 14:32:57.014696 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 14:32:57.014713 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 14:32:57.014730 systemd[1]: Reached target machines.target - Containers.
Jan 14 14:32:57.014750 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 14:32:57.014767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 14:32:57.014784 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 14:32:57.014801 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 14:32:57.014817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 14:32:57.014834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 14:32:57.014849 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 14:32:57.014866 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 14:32:57.014883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 14:32:57.014905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 14:32:57.014924 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 14:32:57.014946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 14:32:57.014964 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 14:32:57.014983 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 14:32:57.015002 kernel: fuse: init (API version 7.39)
Jan 14 14:32:57.015019 kernel: loop: module loaded
Jan 14 14:32:57.015037 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 14:32:57.015059 kernel: ACPI: bus type drm_connector registered
Jan 14 14:32:57.015114 systemd-journald[1264]: Collecting audit messages is disabled.
Jan 14 14:32:57.015154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 14:32:57.015174 systemd-journald[1264]: Journal started
Jan 14 14:32:57.015214 systemd-journald[1264]: Runtime Journal (/run/log/journal/dbe429ecab4d40afa19c5a76291806e1) is 8.0M, max 158.8M, 150.8M free.
Jan 14 14:32:56.400934 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 14:32:56.432954 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 14 14:32:56.433344 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 14:32:57.024821 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 14:32:57.040079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 14:32:57.058544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 14:32:57.063414 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 14 14:32:57.063483 systemd[1]: Stopped verity-setup.service.
Jan 14 14:32:57.077409 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:32:57.086893 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 14:32:57.087994 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 14:32:57.091901 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 14:32:57.095517 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 14:32:57.099010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 14:32:57.102587 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 14:32:57.106284 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 14:32:57.109752 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 14:32:57.114019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 14:32:57.118953 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 14:32:57.119288 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 14:32:57.123706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 14:32:57.124056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 14:32:57.127638 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 14:32:57.127899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 14:32:57.131636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 14:32:57.131908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 14:32:57.135974 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 14:32:57.136309 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 14:32:57.139755 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 14:32:57.139910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 14:32:57.143191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 14:32:57.146493 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 14:32:57.150179 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 14:32:57.155050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 14:32:57.168703 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 14:32:57.176503 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 14:32:57.181226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 14:32:57.184641 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 14:32:57.184693 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 14:32:57.189326 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 14 14:32:57.199706 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 14:32:57.204159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 14:32:57.207189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:32:57.211518 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 14:32:57.231914 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 14:32:57.235283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 14:32:57.237318 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 14:32:57.240746 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 14:32:57.243206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 14:32:57.256575 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 14:32:57.259866 systemd-journald[1264]: Time spent on flushing to /var/log/journal/dbe429ecab4d40afa19c5a76291806e1 is 33.770ms for 968 entries.
Jan 14 14:32:57.259866 systemd-journald[1264]: System Journal (/var/log/journal/dbe429ecab4d40afa19c5a76291806e1) is 8.0M, max 2.6G, 2.6G free.
Jan 14 14:32:57.329675 systemd-journald[1264]: Received client request to flush runtime journal.
Jan 14 14:32:57.266654 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 14:32:57.281417 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 14 14:32:57.288061 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 14:32:57.292633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 14:32:57.303768 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 14:32:57.308011 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 14:32:57.318705 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 14:32:57.354464 kernel: loop0: detected capacity change from 0 to 140768
Jan 14 14:32:57.333589 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 14 14:32:57.342700 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 14:32:57.362669 udevadm[1297]: systemd-udev-settle.service is deprecated.
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 14 14:32:57.563159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 14:32:57.565361 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 14 14:32:57.565490 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Jan 14 14:32:57.565511 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Jan 14 14:32:57.574939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 14:32:57.584591 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 14:32:57.914107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 14:32:58.018415 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 14:32:58.024493 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 14:32:58.035864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 14:32:58.055420 kernel: loop1: detected capacity change from 0 to 31056
Jan 14 14:32:58.059653 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Jan 14 14:32:58.059675 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Jan 14 14:32:58.066248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 14:32:58.228424 kernel: loop2: detected capacity change from 0 to 211296
Jan 14 14:32:58.266601 kernel: loop3: detected capacity change from 0 to 142488
Jan 14 14:32:58.747420 kernel: loop4: detected capacity change from 0 to 140768
Jan 14 14:32:58.764606 kernel: loop5: detected capacity change from 0 to 31056
Jan 14 14:32:58.772504 kernel: loop6: detected capacity change from 0 to 211296
Jan 14 14:32:58.782380 kernel: loop7: detected capacity change from 0 to 142488
Jan 14 14:32:58.790730 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 14 14:32:58.791453 (sd-merge)[1322]: Merged extensions into '/usr'.
Jan 14 14:32:58.794741 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 14:32:58.794756 systemd[1]: Reloading...
Jan 14 14:32:58.882421 zram_generator::config[1344]: No configuration found.
Jan 14 14:32:59.160229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 14:32:59.215934 systemd[1]: Reloading finished in 420 ms.
Jan 14 14:32:59.244353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 14:32:59.256607 systemd[1]: Starting ensure-sysext.service...
Jan 14 14:32:59.268755 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 14:32:59.291859 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 14:32:59.292317 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 14:32:59.293482 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 14:32:59.293884 systemd-tmpfiles[1407]: ACLs are not supported, ignoring.
Jan 14 14:32:59.293965 systemd-tmpfiles[1407]: ACLs are not supported, ignoring.
Jan 14 14:32:59.308118 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 14:32:59.309463 systemd-tmpfiles[1407]: Skipping /boot
Jan 14 14:32:59.323820 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 14:32:59.323957 systemd-tmpfiles[1407]: Skipping /boot
Jan 14 14:32:59.324538 systemd[1]: Reloading requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)...
Jan 14 14:32:59.324567 systemd[1]: Reloading...
Jan 14 14:32:59.406424 zram_generator::config[1431]: No configuration found.
Jan 14 14:32:59.531178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 14:32:59.587519 systemd[1]: Reloading finished in 262 ms.
Jan 14 14:32:59.609886 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 14:32:59.625692 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 14 14:33:00.011869 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 14:33:00.019102 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 14:33:00.209744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 14:33:00.214225 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 14:33:00.220663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:33:00.220923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 14:33:00.222183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 14:33:00.230494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 14:33:00.240966 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 14:33:00.243943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:33:00.244260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:33:00.245758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 14:33:00.246039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 14:33:00.250353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 14:33:00.250785 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 14:33:00.258151 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 14:33:00.264135 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:33:00.264369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 14:33:00.270896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 14:33:00.287700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 14:33:00.291933 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:33:00.293003 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:33:00.293986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 14:33:00.294195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 14:33:00.315790 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 14:33:00.327752 augenrules[1522]: No rules
Jan 14 14:33:00.331610 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 14 14:33:00.337590 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 14:33:00.349674 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 14:33:00.354275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 14:33:00.354459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 14:33:00.373956 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 14:33:00.379150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 14:33:00.379581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 14:33:00.393485 systemd[1]: Finished ensure-sysext.service.
Jan 14 14:33:00.400661 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Jan 14 14:33:00.404379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:33:00.404733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 14:33:00.411567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 14:33:00.421581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 14:33:00.425505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:33:00.425583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 14:33:00.425641 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 14:33:00.439577 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 14:33:00.443400 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:33:00.443894 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 14:33:00.444120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 14:33:00.448933 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 14:33:00.449248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 14:33:00.458011 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 14:33:00.465189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 14:33:00.465263 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 14:33:00.469494 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 14:33:00.498793 systemd-udevd[1538]: Using default interface naming scheme 'v255'.
Jan 14 14:33:00.540329 systemd-resolved[1511]: Positive Trust Anchors:
Jan 14 14:33:00.540349 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 14:33:00.540411 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 14:33:00.551643 systemd-resolved[1511]: Using system hostname 'ci-4081.3.0-a-a739250a79'.
Jan 14 14:33:00.553674 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 14:33:00.557427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 14:33:00.573628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 14:33:00.587616 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 14:33:00.658222 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 14 14:33:00.786859 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Jan 14 14:33:00.809851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:33:00.914892 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 14:33:00.920411 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 14:33:00.929408 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 14:33:01.007419 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 14:33:01.011433 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 14:33:01.012235 systemd-networkd[1550]: lo: Link UP
Jan 14 14:33:01.012244 systemd-networkd[1550]: lo: Gained carrier
Jan 14 14:33:01.018422 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 14:33:01.019353 systemd-networkd[1550]: Enumeration completed
Jan 14 14:33:01.019490 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 14:33:01.022657 systemd[1]: Reached target network.target - Network.
Jan 14 14:33:01.029729 systemd-networkd[1550]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:33:01.029741 systemd-networkd[1550]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:33:01.033141 kernel: Console: switching to colour dummy device 80x25
Jan 14 14:33:01.037413 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 14:33:01.041414 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 14:33:01.192155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:33:01.192488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:33:01.204550 kernel: mlx5_core fe2e:00:02.0 enP65070s1: Link up
Jan 14 14:33:01.205559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:33:01.238415 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1555)
Jan 14 14:33:01.292421 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 14:33:01.347382 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 14:33:01.355571 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 14:33:01.573573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:33:01.597332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 14:33:01.694580 kernel: hv_netvsc 000d3ab0-5d97-000d-3ab0-5d97000d3ab0 eth0: Data path switched to VF: enP65070s1
Jan 14 14:33:01.695580 systemd-networkd[1550]: enP65070s1: Link UP
Jan 14 14:33:01.695765 systemd-networkd[1550]: eth0: Link UP
Jan 14 14:33:01.695772 systemd-networkd[1550]: eth0: Gained carrier
Jan 14 14:33:01.695799 systemd-networkd[1550]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:33:01.699891 systemd-networkd[1550]: enP65070s1: Gained carrier
Jan 14 14:33:01.756463 systemd-networkd[1550]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:33:01.776765 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 14:33:01.787603 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 14:33:01.812835 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 14:33:01.864976 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 14:33:01.870515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 14:33:01.878578 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 14:33:01.884921 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 14:33:01.911284 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 14:33:02.403559 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 14:33:02.415569 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 14:33:02.424600 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 14:33:02.441181 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 14:33:02.445548 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 14:33:02.449126 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 14:33:02.452480 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 14:33:02.455993 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 14:33:02.459092 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 14:33:02.463120 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 14:33:02.467324 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 14:33:02.467366 systemd[1]: Reached target paths.target - Path Units.
Jan 14 14:33:02.471017 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 14:33:02.474835 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 14:33:02.479293 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 14:33:02.493303 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 14:33:02.497907 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 14:33:02.501663 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 14:33:02.504357 systemd[1]: Reached target basic.target - Basic System.
Jan 14 14:33:02.507309 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 14:33:02.507342 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 14:33:02.517491 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 14:33:02.522550 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 14:33:02.532595 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 14:33:02.542581 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 14:33:02.546883 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 14:33:02.559572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 14:33:02.562748 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 14:33:02.562809 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 14:33:02.565586 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 14:33:02.569161 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 14:33:02.576144 jq[1652]: false
Jan 14 14:33:02.579559 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 14:33:02.585513 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 14:33:02.585817 (chronyd)[1647]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 14:33:02.590636 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 14:33:02.598985 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 14:33:02.614645 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 14:33:02.619475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 14:33:02.620118 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 14:33:02.623585 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 14:33:02.634919 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 14:33:02.644040 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 14:33:02.644555 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 14:33:02.647340 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 14:33:02.648659 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 14:33:02.663951 chronyd[1675]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 14:33:02.668415 jq[1667]: true
Jan 14 14:33:02.678672 jq[1678]: true
Jan 14 14:33:02.763504 (ntainerd)[1696]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 14:33:02.764439 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 14:33:02.764688 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 14:33:02.805968 KVP[1655]: KVP starting; pid is:1655
Jan 14 14:33:02.810776 extend-filesystems[1653]: Found loop4
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found loop5
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found loop6
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found loop7
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda1
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda2
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda3
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found usr
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda4
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda6
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda7
Jan 14 14:33:02.814024 extend-filesystems[1653]: Found sda9
Jan 14 14:33:02.814024 extend-filesystems[1653]: Checking size of /dev/sda9
Jan 14 14:33:02.855857 chronyd[1675]: Timezone right/UTC failed leap second check, ignoring
Jan 14 14:33:02.857515 chronyd[1675]: Loaded seccomp filter (level 2)
Jan 14 14:33:02.864643 kernel: hv_utils: KVP IC version 4.0
Jan 14 14:33:02.860401 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 14:33:02.860270 KVP[1655]: KVP LIC Version: 3.1
Jan 14 14:33:02.872005 update_engine[1666]: I20250114 14:33:02.871562 1666 main.cc:92] Flatcar Update Engine starting
Jan 14 14:33:02.873108 systemd-logind[1663]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 14:33:02.873595 systemd-logind[1663]: New seat seat0.
Jan 14 14:33:02.875478 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 14:33:02.905095 dbus-daemon[1650]: [system] SELinux support is enabled Jan 14 14:33:02.912718 update_engine[1666]: I20250114 14:33:02.907923 1666 update_check_scheduler.cc:74] Next update check in 11m56s Jan 14 14:33:02.905323 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 14:33:02.944012 tar[1669]: linux-amd64/helm Jan 14 14:33:02.915600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 14:33:02.916832 dbus-daemon[1650]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 14:33:02.916542 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 14:33:02.921646 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 14:33:02.921677 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 14:33:02.925066 systemd[1]: Started update-engine.service - Update Engine. Jan 14 14:33:02.935860 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 14:33:02.956415 bash[1694]: Updated "/home/core/.ssh/authorized_keys" Jan 14 14:33:02.957295 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 14:33:02.963365 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 14:33:02.974083 extend-filesystems[1653]: Old size kept for /dev/sda9 Jan 14 14:33:02.976765 extend-filesystems[1653]: Found sr0 Jan 14 14:33:02.980796 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 14:33:02.981460 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 14 14:33:03.011802 coreos-metadata[1649]: Jan 14 14:33:03.011 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 14:33:03.015970 coreos-metadata[1649]: Jan 14 14:33:03.015 INFO Fetch successful
Jan 14 14:33:03.015970 coreos-metadata[1649]: Jan 14 14:33:03.015 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 14:33:03.021909 coreos-metadata[1649]: Jan 14 14:33:03.021 INFO Fetch successful
Jan 14 14:33:03.021909 coreos-metadata[1649]: Jan 14 14:33:03.021 INFO Fetching http://168.63.129.16/machine/3067ffa4-f9fe-4123-8ce7-a2149dc77076/696b5e83%2D851d%2D4864%2Da4fd%2De35bfce46a4c.%5Fci%2D4081.3.0%2Da%2Da739250a79?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 14:33:03.030325 coreos-metadata[1649]: Jan 14 14:33:03.030 INFO Fetch successful
Jan 14 14:33:03.030325 coreos-metadata[1649]: Jan 14 14:33:03.030 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 14:33:03.044490 coreos-metadata[1649]: Jan 14 14:33:03.044 INFO Fetch successful
Jan 14 14:33:03.085688 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1555)
Jan 14 14:33:03.087503 systemd-networkd[1550]: enP65070s1: Gained IPv6LL
Jan 14 14:33:03.126329 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 14:33:03.131943 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 14:33:03.325116 sshd_keygen[1676]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 14:33:03.358049 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 14:33:03.369653 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 14:33:03.388540 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 14:33:03.388768 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 14:33:03.402147 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 14:33:03.403497 systemd-networkd[1550]: eth0: Gained IPv6LL
Jan 14 14:33:03.409653 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 14:33:03.418121 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 14:33:03.431672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:33:03.447710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 14:33:03.459501 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 14:33:03.465918 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 14:33:03.485884 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 14:33:03.498811 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 14:33:03.506937 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 14:33:03.534542 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 14:33:03.715326 tar[1669]: linux-amd64/LICENSE
Jan 14 14:33:03.715476 tar[1669]: linux-amd64/README.md
Jan 14 14:33:03.734250 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 14:33:03.824602 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 14:33:03.836827 locksmithd[1708]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 14:33:04.322301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:33:04.330725 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 14:33:04.516428 containerd[1696]: time="2025-01-14T14:33:04.514970900Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 14 14:33:04.561913 containerd[1696]: time="2025-01-14T14:33:04.561848200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564066 containerd[1696]: time="2025-01-14T14:33:04.563715800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564066 containerd[1696]: time="2025-01-14T14:33:04.563757800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 14:33:04.564066 containerd[1696]: time="2025-01-14T14:33:04.563780500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 14:33:04.564066 containerd[1696]: time="2025-01-14T14:33:04.563960300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 14:33:04.564066 containerd[1696]: time="2025-01-14T14:33:04.563985100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564066 containerd[1696]: time="2025-01-14T14:33:04.564058000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564318 containerd[1696]: time="2025-01-14T14:33:04.564076100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564318 containerd[1696]: time="2025-01-14T14:33:04.564290900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564318 containerd[1696]: time="2025-01-14T14:33:04.564313000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564451 containerd[1696]: time="2025-01-14T14:33:04.564336200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564451 containerd[1696]: time="2025-01-14T14:33:04.564352200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.564525 containerd[1696]: time="2025-01-14T14:33:04.564475800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.565194 containerd[1696]: time="2025-01-14T14:33:04.564719000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:33:04.565194 containerd[1696]: time="2025-01-14T14:33:04.564887300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:33:04.565194 containerd[1696]: time="2025-01-14T14:33:04.564909300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 14:33:04.565194 containerd[1696]: time="2025-01-14T14:33:04.565007800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 14:33:04.565194 containerd[1696]: time="2025-01-14T14:33:04.565063100Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 14:33:04.611307 containerd[1696]: time="2025-01-14T14:33:04.610654000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 14:33:04.611307 containerd[1696]: time="2025-01-14T14:33:04.610730100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 14:33:04.611307 containerd[1696]: time="2025-01-14T14:33:04.610754600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 14:33:04.611307 containerd[1696]: time="2025-01-14T14:33:04.610775800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 14:33:04.611307 containerd[1696]: time="2025-01-14T14:33:04.610795700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 14:33:04.611307 containerd[1696]: time="2025-01-14T14:33:04.610969000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 14:33:04.612409 containerd[1696]: time="2025-01-14T14:33:04.612361000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 14:33:04.612559 containerd[1696]: time="2025-01-14T14:33:04.612534100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 14:33:04.612607 containerd[1696]: time="2025-01-14T14:33:04.612566900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 14:33:04.612607 containerd[1696]: time="2025-01-14T14:33:04.612598700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 14:33:04.612684 containerd[1696]: time="2025-01-14T14:33:04.612626700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612684 containerd[1696]: time="2025-01-14T14:33:04.612653300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612684 containerd[1696]: time="2025-01-14T14:33:04.612674300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612793 containerd[1696]: time="2025-01-14T14:33:04.612711300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612793 containerd[1696]: time="2025-01-14T14:33:04.612740100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612793 containerd[1696]: time="2025-01-14T14:33:04.612763700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612901 containerd[1696]: time="2025-01-14T14:33:04.612792300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612901 containerd[1696]: time="2025-01-14T14:33:04.612819600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 14 14:33:04.612901 containerd[1696]: time="2025-01-14T14:33:04.612853700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.612901 containerd[1696]: time="2025-01-14T14:33:04.612888300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613049 containerd[1696]: time="2025-01-14T14:33:04.612908100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613049 containerd[1696]: time="2025-01-14T14:33:04.612934400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613049 containerd[1696]: time="2025-01-14T14:33:04.612960900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613049 containerd[1696]: time="2025-01-14T14:33:04.612985200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613049 containerd[1696]: time="2025-01-14T14:33:04.613011900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613049 containerd[1696]: time="2025-01-14T14:33:04.613040000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613064800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613091700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613116400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613151000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613175800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613205100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 14 14:33:04.613253 containerd[1696]: time="2025-01-14T14:33:04.613240400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613500 containerd[1696]: time="2025-01-14T14:33:04.613266200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.613500 containerd[1696]: time="2025-01-14T14:33:04.613288400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 14 14:33:04.613500 containerd[1696]: time="2025-01-14T14:33:04.613348800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 14 14:33:04.616064 containerd[1696]: time="2025-01-14T14:33:04.613384200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 14 14:33:04.617582 containerd[1696]: time="2025-01-14T14:33:04.616531400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 14 14:33:04.617582 containerd[1696]: time="2025-01-14T14:33:04.616566500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 14 14:33:04.617582 containerd[1696]: time="2025-01-14T14:33:04.616582900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.617582 containerd[1696]: time="2025-01-14T14:33:04.616621600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 14 14:33:04.617582 containerd[1696]: time="2025-01-14T14:33:04.616639100Z" level=info msg="NRI interface is disabled by configuration."
Jan 14 14:33:04.617582 containerd[1696]: time="2025-01-14T14:33:04.616653900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 14 14:33:04.617981 containerd[1696]: time="2025-01-14T14:33:04.617116500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 14 14:33:04.617981 containerd[1696]: time="2025-01-14T14:33:04.617230100Z" level=info msg="Connect containerd service"
Jan 14 14:33:04.617981 containerd[1696]: time="2025-01-14T14:33:04.617279700Z" level=info msg="using legacy CRI server"
Jan 14 14:33:04.617981 containerd[1696]: time="2025-01-14T14:33:04.617291000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 14:33:04.617981 containerd[1696]: time="2025-01-14T14:33:04.617500400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618595700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618683800Z" level=info msg="Start subscribing containerd event"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618742600Z" level=info msg="Start recovering state"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618815200Z" level=info msg="Start event monitor"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618832000Z" level=info msg="Start snapshots syncer"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618843600Z" level=info msg="Start cni network conf syncer for default"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.618853700Z" level=info msg="Start streaming server"
Jan 14 14:33:04.619499 containerd[1696]: time="2025-01-14T14:33:04.619350500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 14:33:04.620164 containerd[1696]: time="2025-01-14T14:33:04.619653000Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 14:33:04.620383 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 14:33:04.623819 containerd[1696]: time="2025-01-14T14:33:04.623430300Z" level=info msg="containerd successfully booted in 0.110061s"
Jan 14 14:33:04.625590 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 14:33:04.631255 systemd[1]: Startup finished in 6.419s (firmware) + 10.594s (loader) + 1.073s (kernel) + 16.249s (initrd) + 9.737s (userspace) = 44.075s.
Jan 14 14:33:04.768830 login[1769]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Jan 14 14:33:04.770333 login[1767]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 14:33:04.785295 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 14:33:04.795057 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 14:33:04.800893 systemd-logind[1663]: New session 2 of user core.
Jan 14 14:33:04.815711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 14:33:04.824038 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 14:33:04.835763 (systemd)[1812]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 14:33:05.176186 kubelet[1797]: E0114 14:33:05.176077 1797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 14:33:05.179866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 14:33:05.180043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 14:33:05.200611 systemd[1812]: Queued start job for default target default.target.
Jan 14 14:33:05.205736 systemd[1812]: Created slice app.slice - User Application Slice.
Jan 14 14:33:05.205775 systemd[1812]: Reached target paths.target - Paths.
Jan 14 14:33:05.205795 systemd[1812]: Reached target timers.target - Timers.
Jan 14 14:33:05.208773 systemd[1812]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 14:33:05.223694 systemd[1812]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 14:33:05.223842 systemd[1812]: Reached target sockets.target - Sockets.
Jan 14 14:33:05.223868 systemd[1812]: Reached target basic.target - Basic System.
Jan 14 14:33:05.223917 systemd[1812]: Reached target default.target - Main User Target.
Jan 14 14:33:05.223954 systemd[1812]: Startup finished in 378ms.
Jan 14 14:33:05.224554 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 14:33:05.229577 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 14:33:05.439584 waagent[1774]: 2025-01-14T14:33:05.439409Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.439983Z INFO Daemon Daemon OS: flatcar 4081.3.0
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.441459Z INFO Daemon Daemon Python: 3.11.9
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.443019Z INFO Daemon Daemon Run daemon
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.443490Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0'
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.444082Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.445243Z INFO Daemon Daemon Activate resource disk
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.446206Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.450235Z INFO Daemon Daemon Found device: None
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.451180Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.453309Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.455838Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 14:33:05.480896 waagent[1774]: 2025-01-14T14:33:05.456242Z INFO Daemon Daemon Running default provisioning handler
Jan 14 14:33:05.484110 waagent[1774]: 2025-01-14T14:33:05.484031Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 14:33:05.491571 waagent[1774]: 2025-01-14T14:33:05.491507Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 14:33:05.496285 waagent[1774]: 2025-01-14T14:33:05.496153Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 14:33:05.500972 waagent[1774]: 2025-01-14T14:33:05.496338Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 14:33:05.621885 waagent[1774]: 2025-01-14T14:33:05.621636Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 14:33:05.634024 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 14:33:05.635742 waagent[1774]: 2025-01-14T14:33:05.635665Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 14:33:05.646426 waagent[1774]: 2025-01-14T14:33:05.635966Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 14:33:05.646426 waagent[1774]: 2025-01-14T14:33:05.637082Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 14:33:05.646426 waagent[1774]: 2025-01-14T14:33:05.638155Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 14:33:05.646426 waagent[1774]: 2025-01-14T14:33:05.639334Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 14:33:05.646426 waagent[1774]: 2025-01-14T14:33:05.640506Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 14:33:05.654808 waagent[1774]: 2025-01-14T14:33:05.653666Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 14:33:05.654808 waagent[1774]: 2025-01-14T14:33:05.654064Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 14:33:05.655043 waagent[1774]: 2025-01-14T14:33:05.655004Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 14:33:05.771644 login[1769]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 14:33:05.777605 systemd-logind[1663]: New session 1 of user core.
Jan 14 14:33:05.783556 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 14:33:05.887305 waagent[1774]: 2025-01-14T14:33:05.887197Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 14:33:05.891163 waagent[1774]: 2025-01-14T14:33:05.891095Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 14:33:05.898077 waagent[1774]: 2025-01-14T14:33:05.898013Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 14:33:05.914268 waagent[1774]: 2025-01-14T14:33:05.914209Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 14 14:33:05.933366 waagent[1774]: 2025-01-14T14:33:05.914918Z INFO Daemon
Jan 14 14:33:05.933366 waagent[1774]: 2025-01-14T14:33:05.915932Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ead9af3b-d048-4edd-b547-c02a53e3f1af eTag: 13568104584182810854 source: Fabric]
Jan 14 14:33:05.933366 waagent[1774]: 2025-01-14T14:33:05.916673Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 14 14:33:05.933366 waagent[1774]: 2025-01-14T14:33:05.917325Z INFO Daemon
Jan 14 14:33:05.933366 waagent[1774]: 2025-01-14T14:33:05.917784Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 14:33:05.933366 waagent[1774]: 2025-01-14T14:33:05.922058Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 14:33:05.998448 waagent[1774]: 2025-01-14T14:33:05.998324Z INFO Daemon Downloaded certificate {'thumbprint': '19E03A0F0CA900412D38202073D7C8F8E16C121B', 'hasPrivateKey': True}
Jan 14 14:33:06.003879 waagent[1774]: 2025-01-14T14:33:06.003814Z INFO Daemon Fetch goal state completed
Jan 14 14:33:06.011851 waagent[1774]: 2025-01-14T14:33:06.011798Z INFO Daemon Daemon Starting provisioning
Jan 14 14:33:06.020067 waagent[1774]: 2025-01-14T14:33:06.012099Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 14:33:06.020067 waagent[1774]: 2025-01-14T14:33:06.013446Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-a739250a79]
Jan 14 14:33:06.024028 waagent[1774]: 2025-01-14T14:33:06.023955Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-a739250a79]
Jan 14 14:33:06.032198 waagent[1774]: 2025-01-14T14:33:06.024432Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 14:33:06.032198 waagent[1774]: 2025-01-14T14:33:06.025486Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 14:33:06.041898 systemd-networkd[1550]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:33:06.041908 systemd-networkd[1550]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:33:06.041959 systemd-networkd[1550]: eth0: DHCP lease lost
Jan 14 14:33:06.043233 waagent[1774]: 2025-01-14T14:33:06.043151Z INFO Daemon Daemon Create user account if not exists
Jan 14 14:33:06.060515 waagent[1774]: 2025-01-14T14:33:06.044058Z INFO Daemon Daemon User core already exists, skip useradd
Jan 14 14:33:06.060515 waagent[1774]: 2025-01-14T14:33:06.044827Z INFO Daemon Daemon Configure sudoer
Jan 14 14:33:06.060515 waagent[1774]: 2025-01-14T14:33:06.045609Z INFO Daemon Daemon Configure sshd
Jan 14 14:33:06.060515 waagent[1774]: 2025-01-14T14:33:06.046319Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 14 14:33:06.060515 waagent[1774]: 2025-01-14T14:33:06.046960Z INFO Daemon Daemon Deploy ssh public key.
Jan 14 14:33:06.063509 systemd-networkd[1550]: eth0: DHCPv6 lease lost
Jan 14 14:33:06.095457 systemd-networkd[1550]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:33:15.430556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 14:33:15.435935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:33:15.526731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:33:15.532825 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:33:16.058315 kubelet[1872]: E0114 14:33:16.058249 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:33:16.062475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:33:16.062669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:33:26.140658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 14:33:26.148047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:33:26.237829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:33:26.242460 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:33:26.652580 chronyd[1675]: Selected source PHC0 Jan 14 14:33:26.821759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:33:29.297021 kubelet[1887]: E0114 14:33:26.819111 1887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:33:26.821965 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
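[Editorial note] The kubelet failure above (and its repeats below) comes from a missing /var/lib/kubelet/config.yaml. That file is normally written by `kubeadm init` or `kubeadm join`, so on a node that has not yet been joined to a cluster the kubelet exits with status 1 and systemd keeps scheduling restarts. A minimal Python sketch of the check the kubelet is failing; the path is taken from the log, while the helper name is illustrative:

```python
from pathlib import Path

# Path copied from the kubelet error in the log. On a node where kubeadm
# has not yet run, this file does not exist and kubelet exits immediately.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_would_start(config_path: Path = KUBELET_CONFIG) -> bool:
    """Illustrative helper: True if the kubelet config file is present."""
    return config_path.is_file()

# A deliberately nonexistent path stands in for the fresh-node case.
print(kubelet_would_start(Path("/nonexistent/config.yaml")))
```

Once kubeadm writes the file, the next scheduled restart succeeds and the loop ends.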
Jan 14 14:33:36.119785 waagent[1774]: 2025-01-14T14:33:36.119714Z INFO Daemon Daemon Provisioning complete Jan 14 14:33:36.133621 waagent[1774]: 2025-01-14T14:33:36.133559Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 14:33:36.144541 waagent[1774]: 2025-01-14T14:33:36.133896Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 14 14:33:36.144541 waagent[1774]: 2025-01-14T14:33:36.135314Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 14 14:33:36.262030 waagent[1896]: 2025-01-14T14:33:36.261931Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 14 14:33:36.262477 waagent[1896]: 2025-01-14T14:33:36.262100Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 14 14:33:36.262477 waagent[1896]: 2025-01-14T14:33:36.262183Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 14 14:33:36.280042 waagent[1896]: 2025-01-14T14:33:36.279956Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 14 14:33:36.280264 waagent[1896]: 2025-01-14T14:33:36.280216Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 14:33:36.280362 waagent[1896]: 2025-01-14T14:33:36.280318Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 14:33:36.287972 waagent[1896]: 2025-01-14T14:33:36.287908Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 14:33:36.298254 waagent[1896]: 2025-01-14T14:33:36.298198Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 14 14:33:36.298794 waagent[1896]: 2025-01-14T14:33:36.298734Z INFO ExtHandler Jan 14 14:33:36.298884 waagent[1896]: 2025-01-14T14:33:36.298834Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 66079c27-29aa-42e9-9e86-ac141d944964 eTag: 
13568104584182810854 source: Fabric] Jan 14 14:33:36.299198 waagent[1896]: 2025-01-14T14:33:36.299144Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 14:33:36.299789 waagent[1896]: 2025-01-14T14:33:36.299732Z INFO ExtHandler Jan 14 14:33:36.299865 waagent[1896]: 2025-01-14T14:33:36.299819Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 14:33:36.303542 waagent[1896]: 2025-01-14T14:33:36.303497Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 14:33:36.362832 waagent[1896]: 2025-01-14T14:33:36.362741Z INFO ExtHandler Downloaded certificate {'thumbprint': '19E03A0F0CA900412D38202073D7C8F8E16C121B', 'hasPrivateKey': True} Jan 14 14:33:36.363342 waagent[1896]: 2025-01-14T14:33:36.363284Z INFO ExtHandler Fetch goal state completed Jan 14 14:33:36.380596 waagent[1896]: 2025-01-14T14:33:36.380471Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1896 Jan 14 14:33:36.380758 waagent[1896]: 2025-01-14T14:33:36.380661Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 14:33:36.382304 waagent[1896]: 2025-01-14T14:33:36.382242Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 14:33:36.382686 waagent[1896]: 2025-01-14T14:33:36.382636Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 14:33:36.393449 waagent[1896]: 2025-01-14T14:33:36.393406Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 14:33:36.393650 waagent[1896]: 2025-01-14T14:33:36.393605Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 14:33:36.400262 waagent[1896]: 2025-01-14T14:33:36.400220Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Jan 14 14:33:36.407233 systemd[1]: Reloading requested from client PID 1909 ('systemctl') (unit waagent.service)... Jan 14 14:33:36.407249 systemd[1]: Reloading... Jan 14 14:33:36.502414 zram_generator::config[1943]: No configuration found. Jan 14 14:33:36.619762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 14:33:36.699534 systemd[1]: Reloading finished in 291 ms. Jan 14 14:33:36.729972 waagent[1896]: 2025-01-14T14:33:36.728253Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 14:33:36.734924 systemd[1]: Reloading requested from client PID 2000 ('systemctl') (unit waagent.service)... Jan 14 14:33:36.734940 systemd[1]: Reloading... Jan 14 14:33:36.813465 zram_generator::config[2034]: No configuration found. Jan 14 14:33:36.937883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 14:33:37.017259 systemd[1]: Reloading finished in 281 ms. Jan 14 14:33:37.044459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 14:33:37.045583 waagent[1896]: 2025-01-14T14:33:37.044547Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 14:33:37.045583 waagent[1896]: 2025-01-14T14:33:37.044746Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 14:33:37.054294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:33:39.085598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 14:33:39.096744 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:33:39.291515 kubelet[2104]: E0114 14:33:39.291450 2104 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:33:39.294018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:33:39.294227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:33:40.704690 waagent[1896]: 2025-01-14T14:33:40.704599Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 14:33:40.705368 waagent[1896]: 2025-01-14T14:33:40.705302Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 14:33:40.706179 waagent[1896]: 2025-01-14T14:33:40.706115Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 14:33:40.706606 waagent[1896]: 2025-01-14T14:33:40.706550Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jan 14 14:33:40.706741 waagent[1896]: 2025-01-14T14:33:40.706701Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 14:33:40.706857 waagent[1896]: 2025-01-14T14:33:40.706811Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 14:33:40.707177 waagent[1896]: 2025-01-14T14:33:40.707126Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 14:33:40.707353 waagent[1896]: 2025-01-14T14:33:40.707307Z INFO EnvHandler ExtHandler Configure routes Jan 14 14:33:40.707475 waagent[1896]: 2025-01-14T14:33:40.707436Z INFO EnvHandler ExtHandler Gateway:None Jan 14 14:33:40.707560 waagent[1896]: 2025-01-14T14:33:40.707522Z INFO EnvHandler ExtHandler Routes:None Jan 14 14:33:40.708422 waagent[1896]: 2025-01-14T14:33:40.708074Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 14:33:40.708940 waagent[1896]: 2025-01-14T14:33:40.708883Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 14:33:40.709036 waagent[1896]: 2025-01-14T14:33:40.708993Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 14:33:40.709338 waagent[1896]: 2025-01-14T14:33:40.709272Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 14 14:33:40.709624 waagent[1896]: 2025-01-14T14:33:40.709566Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 14:33:40.709624 waagent[1896]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 14:33:40.709624 waagent[1896]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 14:33:40.709624 waagent[1896]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 14:33:40.709624 waagent[1896]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 14:33:40.709624 waagent[1896]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 14:33:40.709624 waagent[1896]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 14:33:40.709896 waagent[1896]: 2025-01-14T14:33:40.709757Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 14:33:40.709896 waagent[1896]: 2025-01-14T14:33:40.709859Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 14:33:40.711820 waagent[1896]: 2025-01-14T14:33:40.711773Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 14:33:40.719013 waagent[1896]: 2025-01-14T14:33:40.718858Z INFO ExtHandler ExtHandler Jan 14 14:33:40.719013 waagent[1896]: 2025-01-14T14:33:40.718971Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 73a560ed-6c74-46f2-a5fe-b8a304bc95ea correlation 9b7f99e9-9e88-4cb5-bd32-15089081fd29 created: 2025-01-14T14:31:50.381500Z] Jan 14 14:33:40.721469 waagent[1896]: 2025-01-14T14:33:40.719446Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
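[Editorial note] The /proc/net/route dump above stores IPv4 addresses as little-endian hex. A small Python sketch (values copied from the table) decodes them back to dotted quads, confirming they match the DHCPv4 lease reported earlier in the log (10.200.8.10/24, gateway 10.200.8.1):

```python
import socket
import struct

def decode(hexaddr: str) -> str:
    """Decode a little-endian hex address as found in /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

# Values taken from the routing table in the log above.
print(decode("0108C80A"))  # default gateway -> 10.200.8.1
print(decode("0008C80A"))  # local subnet    -> 10.200.8.0
```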
Jan 14 14:33:40.721469 waagent[1896]: 2025-01-14T14:33:40.720244Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 14 14:33:40.734377 waagent[1896]: 2025-01-14T14:33:40.734315Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 14:33:40.734377 waagent[1896]: Executing ['ip', '-a', '-o', 'link']: Jan 14 14:33:40.734377 waagent[1896]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 14:33:40.734377 waagent[1896]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b0:5d:97 brd ff:ff:ff:ff:ff:ff Jan 14 14:33:40.734377 waagent[1896]: 3: enP65070s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b0:5d:97 brd ff:ff:ff:ff:ff:ff\ altname enP65070p0s2 Jan 14 14:33:40.734377 waagent[1896]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 14:33:40.734377 waagent[1896]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 14:33:40.734377 waagent[1896]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 14:33:40.734377 waagent[1896]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 14:33:40.734377 waagent[1896]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 14:33:40.734377 waagent[1896]: 2: eth0 inet6 fe80::20d:3aff:feb0:5d97/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 14:33:40.734377 waagent[1896]: 3: enP65070s1 inet6 fe80::20d:3aff:feb0:5d97/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 14:33:40.757063 waagent[1896]: 2025-01-14T14:33:40.756956Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
0A0B12F4-2A11-405A-B445-29CB36569C7B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 14:33:40.774999 waagent[1896]: 2025-01-14T14:33:40.774944Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 14 14:33:40.774999 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:33:40.774999 waagent[1896]: pkts bytes target prot opt in out source destination Jan 14 14:33:40.774999 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:33:40.774999 waagent[1896]: pkts bytes target prot opt in out source destination Jan 14 14:33:40.774999 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:33:40.774999 waagent[1896]: pkts bytes target prot opt in out source destination Jan 14 14:33:40.774999 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 14:33:40.774999 waagent[1896]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 14:33:40.774999 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 14:33:40.778318 waagent[1896]: 2025-01-14T14:33:40.778260Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 14:33:40.778318 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:33:40.778318 waagent[1896]: pkts bytes target prot opt in out source destination Jan 14 14:33:40.778318 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:33:40.778318 waagent[1896]: pkts bytes target prot opt in out source destination Jan 14 14:33:40.778318 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:33:40.778318 waagent[1896]: pkts bytes target prot opt in out source destination Jan 14 14:33:40.778318 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 14:33:40.778318 waagent[1896]: 4 586 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 14:33:40.778318 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Jan 14 14:33:40.778722 waagent[1896]: 2025-01-14T14:33:40.778659Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 14:33:48.613693 update_engine[1666]: I20250114 14:33:48.613574 1666 update_attempter.cc:509] Updating boot flags... Jan 14 14:33:48.658421 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2152) Jan 14 14:33:48.779436 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2151) Jan 14 14:33:49.023056 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 14:33:49.390562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 14:33:49.402657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:33:49.501237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:33:49.512727 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:33:49.554098 kubelet[2214]: E0114 14:33:49.554036 2214 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:33:49.556667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:33:49.556870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:33:59.640613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 14:33:59.647612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:33:59.764384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 14:33:59.769347 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:00.344403 kubelet[2230]: E0114 14:34:00.344280 2230 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:00.347307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:00.347525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:10.390620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 14:34:10.397606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:34:10.496304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:34:10.509763 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:11.011884 kubelet[2247]: E0114 14:34:11.011797 2247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:11.014474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:11.014677 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:21.140642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 14:34:21.147648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
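[Editorial note] The "Scheduled restart job, restart counter is at N" lines make the kubelet crash loop easy to quantify. A quick Python sketch over the timestamps copied from the log shows restarts roughly every 10-12 seconds, consistent with a systemd Restart= policy using a ~10 s RestartSec (an assumption; the unit file itself is not shown in the log):

```python
from datetime import datetime

# Restart times of kubelet.service, copied from the
# "Scheduled restart job, restart counter is at N" journal lines above.
restarts = [
    "14:33:15.430556", "14:33:26.140658", "14:33:37.044459",
    "14:33:49.390562", "14:33:59.640613", "14:34:10.390620",
    "14:34:21.140642",
]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print([round(g, 1) for g in gaps])
```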
Jan 14 14:34:21.237116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:34:21.241831 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:21.286898 kubelet[2264]: E0114 14:34:21.286813 2264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:21.289310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:21.289630 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:27.690845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 14:34:27.695723 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.16.10:58930.service - OpenSSH per-connection server daemon (10.200.16.10:58930). Jan 14 14:34:28.348049 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 58930 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:28.349779 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:28.354325 systemd-logind[1663]: New session 3 of user core. Jan 14 14:34:28.363596 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 14:34:28.984221 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.16.10:58938.service - OpenSSH per-connection server daemon (10.200.16.10:58938). Jan 14 14:34:29.619783 sshd[2279]: Accepted publickey for core from 10.200.16.10 port 58938 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:29.621538 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:29.627068 systemd-logind[1663]: New session 4 of user core. 
Jan 14 14:34:29.635563 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 14:34:30.074464 sshd[2279]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:30.078910 systemd[1]: sshd@1-10.200.8.10:22-10.200.16.10:58938.service: Deactivated successfully. Jan 14 14:34:30.081158 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 14:34:30.081952 systemd-logind[1663]: Session 4 logged out. Waiting for processes to exit. Jan 14 14:34:30.082932 systemd-logind[1663]: Removed session 4. Jan 14 14:34:30.192330 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.16.10:58940.service - OpenSSH per-connection server daemon (10.200.16.10:58940). Jan 14 14:34:30.960641 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 58940 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:30.962454 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:30.968116 systemd-logind[1663]: New session 5 of user core. Jan 14 14:34:30.977590 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 14:34:31.390625 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 14 14:34:31.396989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:34:31.408716 sshd[2286]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:31.415460 systemd-logind[1663]: Session 5 logged out. Waiting for processes to exit. Jan 14 14:34:31.416851 systemd[1]: sshd@2-10.200.8.10:22-10.200.16.10:58940.service: Deactivated successfully. Jan 14 14:34:31.420294 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 14:34:31.422478 systemd-logind[1663]: Removed session 5. Jan 14 14:34:31.501587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 14:34:31.507470 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:31.521769 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.16.10:58952.service - OpenSSH per-connection server daemon (10.200.16.10:58952). Jan 14 14:34:32.056129 kubelet[2300]: E0114 14:34:32.036226 2300 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:32.038769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:32.038941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:32.152802 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 58952 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:32.154567 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:32.160333 systemd-logind[1663]: New session 6 of user core. Jan 14 14:34:32.166572 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 14:34:32.608711 sshd[2306]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:32.611841 systemd[1]: sshd@3-10.200.8.10:22-10.200.16.10:58952.service: Deactivated successfully. Jan 14 14:34:32.613812 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 14:34:32.615245 systemd-logind[1663]: Session 6 logged out. Waiting for processes to exit. Jan 14 14:34:32.616305 systemd-logind[1663]: Removed session 6. Jan 14 14:34:32.721586 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.16.10:58958.service - OpenSSH per-connection server daemon (10.200.16.10:58958). 
Jan 14 14:34:33.362010 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 58958 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:33.363869 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:33.368965 systemd-logind[1663]: New session 7 of user core. Jan 14 14:34:33.379551 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 14:34:33.752096 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 14:34:33.752487 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:33.770721 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 14 14:34:33.878022 sshd[2317]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:33.882811 systemd[1]: sshd@4-10.200.8.10:22-10.200.16.10:58958.service: Deactivated successfully. Jan 14 14:34:33.884989 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 14:34:33.886047 systemd-logind[1663]: Session 7 logged out. Waiting for processes to exit. Jan 14 14:34:33.887156 systemd-logind[1663]: Removed session 7. Jan 14 14:34:33.995724 systemd[1]: Started sshd@5-10.200.8.10:22-10.200.16.10:58966.service - OpenSSH per-connection server daemon (10.200.16.10:58966). Jan 14 14:34:34.630589 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 58966 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:34.632347 sshd[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:34.637314 systemd-logind[1663]: New session 8 of user core. Jan 14 14:34:34.643548 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 14:34:34.982165 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 14:34:34.982706 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:34.986263 sudo[2329]: pam_unix(sudo:session): session closed for user root Jan 14 14:34:34.991323 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 14 14:34:34.991684 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:35.003723 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 14 14:34:35.006480 auditctl[2332]: No rules Jan 14 14:34:35.007614 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 14:34:35.007865 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 14 14:34:35.009702 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 14 14:34:35.046245 augenrules[2350]: No rules Jan 14 14:34:35.047784 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 14 14:34:35.049609 sudo[2328]: pam_unix(sudo:session): session closed for user root Jan 14 14:34:35.152128 sshd[2325]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:35.155759 systemd[1]: sshd@5-10.200.8.10:22-10.200.16.10:58966.service: Deactivated successfully. Jan 14 14:34:35.158025 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 14:34:35.159770 systemd-logind[1663]: Session 8 logged out. Waiting for processes to exit. Jan 14 14:34:35.160827 systemd-logind[1663]: Removed session 8. Jan 14 14:34:35.268601 systemd[1]: Started sshd@6-10.200.8.10:22-10.200.16.10:58978.service - OpenSSH per-connection server daemon (10.200.16.10:58978). 
Jan 14 14:34:35.902075 sshd[2358]: Accepted publickey for core from 10.200.16.10 port 58978 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:35.903845 sshd[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:35.908253 systemd-logind[1663]: New session 9 of user core. Jan 14 14:34:35.915574 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 14:34:36.253671 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 14:34:36.254084 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:36.792779 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 14:34:36.793786 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 14:34:37.286827 dockerd[2377]: time="2025-01-14T14:34:37.286551929Z" level=info msg="Starting up" Jan 14 14:34:37.547986 dockerd[2377]: time="2025-01-14T14:34:37.547701883Z" level=info msg="Loading containers: start." Jan 14 14:34:37.657416 kernel: Initializing XFRM netlink socket Jan 14 14:34:37.730759 systemd-networkd[1550]: docker0: Link UP Jan 14 14:34:37.764092 dockerd[2377]: time="2025-01-14T14:34:37.764027285Z" level=info msg="Loading containers: done." 
Jan 14 14:34:37.794087 dockerd[2377]: time="2025-01-14T14:34:37.793945020Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 14:34:37.794087 dockerd[2377]: time="2025-01-14T14:34:37.794064321Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 14 14:34:37.794429 dockerd[2377]: time="2025-01-14T14:34:37.794204622Z" level=info msg="Daemon has completed initialization" Jan 14 14:34:37.846981 dockerd[2377]: time="2025-01-14T14:34:37.846826536Z" level=info msg="API listen on /run/docker.sock" Jan 14 14:34:37.847419 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 14:34:39.142679 containerd[1696]: time="2025-01-14T14:34:39.142634933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 14 14:34:39.868876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116209268.mount: Deactivated successfully. 
Jan 14 14:34:41.845659 containerd[1696]: time="2025-01-14T14:34:41.845600056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:34:41.847674 containerd[1696]: time="2025-01-14T14:34:41.847608267Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Jan 14 14:34:41.849947 containerd[1696]: time="2025-01-14T14:34:41.849888380Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:34:41.854719 containerd[1696]: time="2025-01-14T14:34:41.854653306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:34:41.855877 containerd[1696]: time="2025-01-14T14:34:41.855661312Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.712982378s" Jan 14 14:34:41.855877 containerd[1696]: time="2025-01-14T14:34:41.855707112Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 14 14:34:41.879126 containerd[1696]: time="2025-01-14T14:34:41.879086540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 14 14:34:42.140537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
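[Editorial note] As an aside on the pull statistics above: containerd reports the kube-apiserver image (35,136,054 bytes) pulled in roughly 2.71 s. A one-liner turns that into effective throughput, about 13 MB/s (figures copied from the log; this is just illustrative arithmetic):

```python
# Pull statistics for kube-apiserver:v1.29.12, from the containerd line above.
size_bytes = 35136054        # reported image size
duration_s = 2.712982378     # reported pull duration
mb_per_s = size_bytes / duration_s / 1e6
print(f"{mb_per_s:.1f} MB/s")
```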
Jan 14 14:34:42.147887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:34:42.248507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:34:42.255752 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 14:34:42.776263 kubelet[2584]: E0114 14:34:42.776155 2584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 14:34:42.778985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 14:34:42.779201 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 14:34:44.748093 containerd[1696]: time="2025-01-14T14:34:44.748020773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:44.754380 containerd[1696]: time="2025-01-14T14:34:44.754284908Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740"
Jan 14 14:34:44.759113 containerd[1696]: time="2025-01-14T14:34:44.759035834Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:44.767611 containerd[1696]: time="2025-01-14T14:34:44.767554480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:44.768940 containerd[1696]: time="2025-01-14T14:34:44.768775587Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.889647947s"
Jan 14 14:34:44.768940 containerd[1696]: time="2025-01-14T14:34:44.768824487Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 14 14:34:44.794623 containerd[1696]: time="2025-01-14T14:34:44.794574029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 14 14:34:46.269189 containerd[1696]: time="2025-01-14T14:34:46.269131515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:46.271226 containerd[1696]: time="2025-01-14T14:34:46.271160426Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830"
Jan 14 14:34:46.278761 containerd[1696]: time="2025-01-14T14:34:46.278695268Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:46.283549 containerd[1696]: time="2025-01-14T14:34:46.283476894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:46.284594 containerd[1696]: time="2025-01-14T14:34:46.284415099Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.48979767s"
Jan 14 14:34:46.284594 containerd[1696]: time="2025-01-14T14:34:46.284457699Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 14 14:34:46.306314 containerd[1696]: time="2025-01-14T14:34:46.306267419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 14 14:34:47.462181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457859510.mount: Deactivated successfully.
Jan 14 14:34:47.906399 containerd[1696]: time="2025-01-14T14:34:47.906322317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:47.908238 containerd[1696]: time="2025-01-14T14:34:47.908183131Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966"
Jan 14 14:34:47.911721 containerd[1696]: time="2025-01-14T14:34:47.911660857Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:47.915603 containerd[1696]: time="2025-01-14T14:34:47.915538985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:47.916343 containerd[1696]: time="2025-01-14T14:34:47.916163690Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.609844571s"
Jan 14 14:34:47.916343 containerd[1696]: time="2025-01-14T14:34:47.916211990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 14 14:34:47.939881 containerd[1696]: time="2025-01-14T14:34:47.939837365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 14 14:34:48.543466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067991577.mount: Deactivated successfully.
Jan 14 14:34:49.672479 containerd[1696]: time="2025-01-14T14:34:49.672421771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:49.676069 containerd[1696]: time="2025-01-14T14:34:49.676000297Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Jan 14 14:34:49.679418 containerd[1696]: time="2025-01-14T14:34:49.679336322Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:49.684273 containerd[1696]: time="2025-01-14T14:34:49.684201658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:49.685420 containerd[1696]: time="2025-01-14T14:34:49.685233966Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.745350301s"
Jan 14 14:34:49.685420 containerd[1696]: time="2025-01-14T14:34:49.685278266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 14 14:34:49.708210 containerd[1696]: time="2025-01-14T14:34:49.708164235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 14 14:34:50.196858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284202583.mount: Deactivated successfully.
Jan 14 14:34:50.241303 containerd[1696]: time="2025-01-14T14:34:50.241242975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:50.243337 containerd[1696]: time="2025-01-14T14:34:50.243264690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298"
Jan 14 14:34:50.248565 containerd[1696]: time="2025-01-14T14:34:50.248508429Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:50.252807 containerd[1696]: time="2025-01-14T14:34:50.252753860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:50.253496 containerd[1696]: time="2025-01-14T14:34:50.253456966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 545.25013ms"
Jan 14 14:34:50.253604 containerd[1696]: time="2025-01-14T14:34:50.253503066Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 14 14:34:50.274919 containerd[1696]: time="2025-01-14T14:34:50.274879924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 14 14:34:50.804913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814777107.mount: Deactivated successfully.
Jan 14 14:34:52.890905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 14 14:34:52.900628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:34:53.029238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:34:53.039012 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 14:34:54.094637 kubelet[2729]: E0114 14:34:54.094571 2729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 14:34:54.097274 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 14:34:54.097500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 14:34:54.311648 containerd[1696]: time="2025-01-14T14:34:54.311587560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:54.318279 containerd[1696]: time="2025-01-14T14:34:54.318206309Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633"
Jan 14 14:34:54.326318 containerd[1696]: time="2025-01-14T14:34:54.326242269Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:54.330325 containerd[1696]: time="2025-01-14T14:34:54.330273998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:34:54.331806 containerd[1696]: time="2025-01-14T14:34:54.331346006Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.056344481s"
Jan 14 14:34:54.331806 containerd[1696]: time="2025-01-14T14:34:54.331404707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 14 14:34:58.395459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:34:58.403689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:34:58.436160 systemd[1]: Reloading requested from client PID 2802 ('systemctl') (unit session-9.scope)...
Jan 14 14:34:58.436185 systemd[1]: Reloading...
Jan 14 14:34:58.553433 zram_generator::config[2843]: No configuration found.
Jan 14 14:34:58.685369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 14:34:58.764720 systemd[1]: Reloading finished in 328 ms.
Jan 14 14:34:58.812932 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 14 14:34:58.813130 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 14 14:34:58.813495 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:34:58.819736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:34:59.049576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:34:59.064798 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 14:34:59.108979 kubelet[2911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 14:34:59.108979 kubelet[2911]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 14:34:59.108979 kubelet[2911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 14:34:59.109493 kubelet[2911]: I0114 14:34:59.109086 2911 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 14:34:59.653243 kubelet[2911]: I0114 14:34:59.652809 2911 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 14 14:34:59.653243 kubelet[2911]: I0114 14:34:59.652853 2911 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 14:34:59.653243 kubelet[2911]: I0114 14:34:59.653150 2911 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 14 14:34:59.671261 kubelet[2911]: E0114 14:34:59.671206 2911 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.672138 kubelet[2911]: I0114 14:34:59.672107 2911 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 14:34:59.681618 kubelet[2911]: I0114 14:34:59.681587 2911 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 14:34:59.682921 kubelet[2911]: I0114 14:34:59.682889 2911 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 14:34:59.683125 kubelet[2911]: I0114 14:34:59.683092 2911 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 14:34:59.683670 kubelet[2911]: I0114 14:34:59.683641 2911 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 14:34:59.683670 kubelet[2911]: I0114 14:34:59.683671 2911 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 14:34:59.683821 kubelet[2911]: I0114 14:34:59.683800 2911 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 14:34:59.683942 kubelet[2911]: I0114 14:34:59.683927 2911 kubelet.go:396] "Attempting to sync node with API server"
Jan 14 14:34:59.683998 kubelet[2911]: I0114 14:34:59.683950 2911 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 14:34:59.683998 kubelet[2911]: I0114 14:34:59.683986 2911 kubelet.go:312] "Adding apiserver pod source"
Jan 14 14:34:59.684063 kubelet[2911]: I0114 14:34:59.684005 2911 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 14:34:59.693966 kubelet[2911]: W0114 14:34:59.692099 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a739250a79&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.693966 kubelet[2911]: E0114 14:34:59.692173 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a739250a79&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.693966 kubelet[2911]: W0114 14:34:59.692738 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.693966 kubelet[2911]: E0114 14:34:59.692789 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.695291 kubelet[2911]: I0114 14:34:59.695270 2911 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 14 14:34:59.698590 kubelet[2911]: I0114 14:34:59.698569 2911 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 14:34:59.698684 kubelet[2911]: W0114 14:34:59.698640 2911 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 14:34:59.699640 kubelet[2911]: I0114 14:34:59.699278 2911 server.go:1256] "Started kubelet"
Jan 14 14:34:59.700700 kubelet[2911]: I0114 14:34:59.700463 2911 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 14:34:59.706767 kubelet[2911]: I0114 14:34:59.706275 2911 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 14:34:59.706767 kubelet[2911]: E0114 14:34:59.706699 2911 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-a739250a79.181a95c9ab46c64d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-a739250a79,UID:ci-4081.3.0-a-a739250a79,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-a739250a79,},FirstTimestamp:2025-01-14 14:34:59.699254861 +0000 UTC m=+0.630227885,LastTimestamp:2025-01-14 14:34:59.699254861 +0000 UTC m=+0.630227885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-a739250a79,}"
Jan 14 14:34:59.707320 kubelet[2911]: I0114 14:34:59.707294 2911 server.go:461] "Adding debug handlers to kubelet server"
Jan 14 14:34:59.708752 kubelet[2911]: I0114 14:34:59.708563 2911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 14:34:59.709067 kubelet[2911]: I0114 14:34:59.709047 2911 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 14:34:59.709455 kubelet[2911]: I0114 14:34:59.709286 2911 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 14:34:59.711885 kubelet[2911]: I0114 14:34:59.711152 2911 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 14 14:34:59.711885 kubelet[2911]: I0114 14:34:59.711213 2911 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 14 14:34:59.712266 kubelet[2911]: W0114 14:34:59.712223 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.712375 kubelet[2911]: E0114 14:34:59.712363 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.712593 kubelet[2911]: E0114 14:34:59.712575 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a739250a79?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="200ms"
Jan 14 14:34:59.713956 kubelet[2911]: I0114 14:34:59.713937 2911 factory.go:221] Registration of the containerd container factory successfully
Jan 14 14:34:59.714059 kubelet[2911]: I0114 14:34:59.714049 2911 factory.go:221] Registration of the systemd container factory successfully
Jan 14 14:34:59.714210 kubelet[2911]: I0114 14:34:59.714191 2911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 14:34:59.725781 kubelet[2911]: E0114 14:34:59.725748 2911 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 14:34:59.728950 kubelet[2911]: I0114 14:34:59.728844 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 14:34:59.731557 kubelet[2911]: I0114 14:34:59.731536 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 14:34:59.731641 kubelet[2911]: I0114 14:34:59.731568 2911 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 14:34:59.731641 kubelet[2911]: I0114 14:34:59.731598 2911 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 14 14:34:59.731715 kubelet[2911]: E0114 14:34:59.731646 2911 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 14:34:59.737644 kubelet[2911]: W0114 14:34:59.737537 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.737644 kubelet[2911]: E0114 14:34:59.737611 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:34:59.755089 kubelet[2911]: I0114 14:34:59.755039 2911 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 14:34:59.755089 kubelet[2911]: I0114 14:34:59.755068 2911 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 14:34:59.755089 kubelet[2911]: I0114 14:34:59.755090 2911 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 14:34:59.760223 kubelet[2911]: I0114 14:34:59.760191 2911 policy_none.go:49] "None policy: Start"
Jan 14 14:34:59.760859 kubelet[2911]: I0114 14:34:59.760823 2911 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 14:34:59.760859 kubelet[2911]: I0114 14:34:59.760868 2911 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 14:34:59.768965 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 14 14:34:59.781355 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 14:34:59.784637 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 14:34:59.793037 kubelet[2911]: I0114 14:34:59.792122 2911 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 14:34:59.793037 kubelet[2911]: I0114 14:34:59.792428 2911 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 14:34:59.794212 kubelet[2911]: E0114 14:34:59.794149 2911 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-a739250a79\" not found"
Jan 14 14:34:59.812003 kubelet[2911]: I0114 14:34:59.811974 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a739250a79"
Jan 14 14:34:59.812363 kubelet[2911]: E0114 14:34:59.812343 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.0-a-a739250a79"
Jan 14 14:34:59.832786 kubelet[2911]: I0114 14:34:59.832720 2911 topology_manager.go:215] "Topology Admit Handler" podUID="e3d569daf38910411a5951db0a487e17" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-a739250a79"
Jan 14 14:34:59.834900 kubelet[2911]: I0114 14:34:59.834872 2911 topology_manager.go:215] "Topology Admit Handler" podUID="70ef4dceca76911078c0500a239b33e0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-a739250a79"
Jan 14 14:34:59.836643 kubelet[2911]: I0114 14:34:59.836615 2911 topology_manager.go:215] "Topology Admit Handler" podUID="c11ff64f4c96d4ff36fd8c6320729c9b" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-a739250a79"
Jan 14 14:34:59.844538 systemd[1]: Created slice kubepods-burstable-pode3d569daf38910411a5951db0a487e17.slice - libcontainer container kubepods-burstable-pode3d569daf38910411a5951db0a487e17.slice.
Jan 14 14:34:59.858130 systemd[1]: Created slice kubepods-burstable-podc11ff64f4c96d4ff36fd8c6320729c9b.slice - libcontainer container kubepods-burstable-podc11ff64f4c96d4ff36fd8c6320729c9b.slice.
Jan 14 14:34:59.863026 systemd[1]: Created slice kubepods-burstable-pod70ef4dceca76911078c0500a239b33e0.slice - libcontainer container kubepods-burstable-pod70ef4dceca76911078c0500a239b33e0.slice.
Jan 14 14:34:59.914228 kubelet[2911]: E0114 14:34:59.914108 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a739250a79?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="400ms"
Jan 14 14:35:00.012726 kubelet[2911]: I0114 14:35:00.012498 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3d569daf38910411a5951db0a487e17-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a739250a79\" (UID: \"e3d569daf38910411a5951db0a487e17\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.012726 kubelet[2911]: I0114 14:35:00.012575 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c11ff64f4c96d4ff36fd8c6320729c9b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-a739250a79\" (UID: \"c11ff64f4c96d4ff36fd8c6320729c9b\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.012726 kubelet[2911]: I0114 14:35:00.012617 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.012726 kubelet[2911]: I0114 14:35:00.012648 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.012726 kubelet[2911]: I0114 14:35:00.012681 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.013110 kubelet[2911]: I0114 14:35:00.012720 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.013110 kubelet[2911]: I0114 14:35:00.012753 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3d569daf38910411a5951db0a487e17-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a739250a79\" (UID: \"e3d569daf38910411a5951db0a487e17\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.013110 kubelet[2911]: I0114 14:35:00.012786 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3d569daf38910411a5951db0a487e17-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-a739250a79\" (UID: \"e3d569daf38910411a5951db0a487e17\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.013110 kubelet[2911]: I0114 14:35:00.012819 2911 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.014909 kubelet[2911]: I0114 14:35:00.014863 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.015278 kubelet[2911]: E0114 14:35:00.015251 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.156036 containerd[1696]: time="2025-01-14T14:35:00.155969535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-a739250a79,Uid:e3d569daf38910411a5951db0a487e17,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:00.165726 containerd[1696]: time="2025-01-14T14:35:00.165615519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-a739250a79,Uid:c11ff64f4c96d4ff36fd8c6320729c9b,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:00.170682 containerd[1696]: time="2025-01-14T14:35:00.170586462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-a739250a79,Uid:70ef4dceca76911078c0500a239b33e0,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:00.315308 kubelet[2911]: E0114 14:35:00.315261 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a739250a79?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="800ms"
Jan 14 14:35:00.417788 kubelet[2911]: I0114 14:35:00.417667 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.418118 kubelet[2911]: E0114 14:35:00.418073 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.0-a-a739250a79"
Jan 14 14:35:00.709045 kubelet[2911]: W0114 14:35:00.708893 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:35:00.709045 kubelet[2911]: E0114 14:35:00.708980 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Jan 14 14:35:00.718554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551769402.mount: Deactivated successfully.
Jan 14 14:35:00.747810 containerd[1696]: time="2025-01-14T14:35:00.747754785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:00.750873 containerd[1696]: time="2025-01-14T14:35:00.750826211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 14 14:35:00.754261 containerd[1696]: time="2025-01-14T14:35:00.754227641Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:00.757340 containerd[1696]: time="2025-01-14T14:35:00.757306868Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:00.760008
containerd[1696]: time="2025-01-14T14:35:00.759960691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 14:35:00.763297 containerd[1696]: time="2025-01-14T14:35:00.763266320Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 14:35:00.765677 containerd[1696]: time="2025-01-14T14:35:00.765625740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 14 14:35:00.770772 containerd[1696]: time="2025-01-14T14:35:00.770723884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 14:35:00.772012 containerd[1696]: time="2025-01-14T14:35:00.771480291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 600.808928ms" Jan 14 14:35:00.772974 containerd[1696]: time="2025-01-14T14:35:00.772940804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.884668ms" Jan 14 14:35:00.780762 containerd[1696]: time="2025-01-14T14:35:00.780719871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 615.027651ms" Jan 14 14:35:00.883582 kubelet[2911]: W0114 14:35:00.883517 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jan 14 14:35:00.883582 kubelet[2911]: E0114 14:35:00.883584 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jan 14 14:35:01.063598 containerd[1696]: time="2025-01-14T14:35:01.062183521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:35:01.063598 containerd[1696]: time="2025-01-14T14:35:01.062253021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:35:01.063598 containerd[1696]: time="2025-01-14T14:35:01.062293722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:01.063598 containerd[1696]: time="2025-01-14T14:35:01.062465123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:01.065784 containerd[1696]: time="2025-01-14T14:35:01.065424849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:35:01.065784 containerd[1696]: time="2025-01-14T14:35:01.065501149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:35:01.065784 containerd[1696]: time="2025-01-14T14:35:01.065522850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:01.065784 containerd[1696]: time="2025-01-14T14:35:01.065611450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:01.070057 containerd[1696]: time="2025-01-14T14:35:01.069980088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:35:01.070170 containerd[1696]: time="2025-01-14T14:35:01.070094189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:35:01.070170 containerd[1696]: time="2025-01-14T14:35:01.070114790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:01.070605 containerd[1696]: time="2025-01-14T14:35:01.070267291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:01.104589 systemd[1]: Started cri-containerd-793c6462eef0a88017b9f3961ab6b12ffa7036bab3883ce4c7739fe6374d8b09.scope - libcontainer container 793c6462eef0a88017b9f3961ab6b12ffa7036bab3883ce4c7739fe6374d8b09. Jan 14 14:35:01.106869 systemd[1]: Started cri-containerd-8a6b7d260fd56f280be40f0d541191a7849d6aa6614861befc78958f512e8344.scope - libcontainer container 8a6b7d260fd56f280be40f0d541191a7849d6aa6614861befc78958f512e8344. 
Jan 14 14:35:01.110513 systemd[1]: Started cri-containerd-e7f337c4e91a6167434c73658b563d27fbaf5b7d521e08cd691f8e9c336cb578.scope - libcontainer container e7f337c4e91a6167434c73658b563d27fbaf5b7d521e08cd691f8e9c336cb578. Jan 14 14:35:01.116927 kubelet[2911]: E0114 14:35:01.116825 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a739250a79?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="1.6s" Jan 14 14:35:01.170292 containerd[1696]: time="2025-01-14T14:35:01.170242161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-a739250a79,Uid:e3d569daf38910411a5951db0a487e17,Namespace:kube-system,Attempt:0,} returns sandbox id \"793c6462eef0a88017b9f3961ab6b12ffa7036bab3883ce4c7739fe6374d8b09\"" Jan 14 14:35:01.183920 containerd[1696]: time="2025-01-14T14:35:01.181873262Z" level=info msg="CreateContainer within sandbox \"793c6462eef0a88017b9f3961ab6b12ffa7036bab3883ce4c7739fe6374d8b09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 14:35:01.197596 containerd[1696]: time="2025-01-14T14:35:01.197555699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-a739250a79,Uid:c11ff64f4c96d4ff36fd8c6320729c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7f337c4e91a6167434c73658b563d27fbaf5b7d521e08cd691f8e9c336cb578\"" Jan 14 14:35:01.203583 containerd[1696]: time="2025-01-14T14:35:01.203539451Z" level=info msg="CreateContainer within sandbox \"e7f337c4e91a6167434c73658b563d27fbaf5b7d521e08cd691f8e9c336cb578\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 14:35:01.206472 containerd[1696]: time="2025-01-14T14:35:01.206402576Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-a739250a79,Uid:70ef4dceca76911078c0500a239b33e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a6b7d260fd56f280be40f0d541191a7849d6aa6614861befc78958f512e8344\"" Jan 14 14:35:01.208950 containerd[1696]: time="2025-01-14T14:35:01.208914397Z" level=info msg="CreateContainer within sandbox \"8a6b7d260fd56f280be40f0d541191a7849d6aa6614861befc78958f512e8344\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 14:35:01.219837 kubelet[2911]: I0114 14:35:01.219770 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:01.220135 kubelet[2911]: E0114 14:35:01.220115 2911 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:01.242975 containerd[1696]: time="2025-01-14T14:35:01.242920993Z" level=info msg="CreateContainer within sandbox \"793c6462eef0a88017b9f3961ab6b12ffa7036bab3883ce4c7739fe6374d8b09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc8cc4a67999778fbf31d029a79474a267391b466202d29fbfe1639eb72eff6f\"" Jan 14 14:35:01.243768 containerd[1696]: time="2025-01-14T14:35:01.243724700Z" level=info msg="StartContainer for \"dc8cc4a67999778fbf31d029a79474a267391b466202d29fbfe1639eb72eff6f\"" Jan 14 14:35:01.271590 systemd[1]: Started cri-containerd-dc8cc4a67999778fbf31d029a79474a267391b466202d29fbfe1639eb72eff6f.scope - libcontainer container dc8cc4a67999778fbf31d029a79474a267391b466202d29fbfe1639eb72eff6f. 
Jan 14 14:35:01.279944 kubelet[2911]: W0114 14:35:01.279878 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a739250a79&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jan 14 14:35:01.280417 kubelet[2911]: E0114 14:35:01.279956 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a739250a79&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jan 14 14:35:01.281323 containerd[1696]: time="2025-01-14T14:35:01.281195426Z" level=info msg="CreateContainer within sandbox \"e7f337c4e91a6167434c73658b563d27fbaf5b7d521e08cd691f8e9c336cb578\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47fb77f8aa920a1780c39a982b83e998472e58d4e45a273fafc2d78d881d40f6\"" Jan 14 14:35:01.282153 containerd[1696]: time="2025-01-14T14:35:01.281889032Z" level=info msg="StartContainer for \"47fb77f8aa920a1780c39a982b83e998472e58d4e45a273fafc2d78d881d40f6\"" Jan 14 14:35:01.286280 containerd[1696]: time="2025-01-14T14:35:01.286247370Z" level=info msg="CreateContainer within sandbox \"8a6b7d260fd56f280be40f0d541191a7849d6aa6614861befc78958f512e8344\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9714c05d21b8d2427a1527594222cc7a21a50e80525bd9570b9a438f669a412a\"" Jan 14 14:35:01.287259 containerd[1696]: time="2025-01-14T14:35:01.287225779Z" level=info msg="StartContainer for \"9714c05d21b8d2427a1527594222cc7a21a50e80525bd9570b9a438f669a412a\"" Jan 14 14:35:01.291877 kubelet[2911]: W0114 14:35:01.291739 2911 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jan 14 14:35:01.292220 kubelet[2911]: E0114 14:35:01.292167 2911 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jan 14 14:35:01.324601 systemd[1]: Started cri-containerd-47fb77f8aa920a1780c39a982b83e998472e58d4e45a273fafc2d78d881d40f6.scope - libcontainer container 47fb77f8aa920a1780c39a982b83e998472e58d4e45a273fafc2d78d881d40f6. Jan 14 14:35:01.345739 systemd[1]: Started cri-containerd-9714c05d21b8d2427a1527594222cc7a21a50e80525bd9570b9a438f669a412a.scope - libcontainer container 9714c05d21b8d2427a1527594222cc7a21a50e80525bd9570b9a438f669a412a. Jan 14 14:35:01.362853 containerd[1696]: time="2025-01-14T14:35:01.362731636Z" level=info msg="StartContainer for \"dc8cc4a67999778fbf31d029a79474a267391b466202d29fbfe1639eb72eff6f\" returns successfully" Jan 14 14:35:01.420625 containerd[1696]: time="2025-01-14T14:35:01.420230036Z" level=info msg="StartContainer for \"9714c05d21b8d2427a1527594222cc7a21a50e80525bd9570b9a438f669a412a\" returns successfully" Jan 14 14:35:01.449497 containerd[1696]: time="2025-01-14T14:35:01.448922286Z" level=info msg="StartContainer for \"47fb77f8aa920a1780c39a982b83e998472e58d4e45a273fafc2d78d881d40f6\" returns successfully" Jan 14 14:35:02.824080 kubelet[2911]: I0114 14:35:02.824043 2911 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:03.547264 kubelet[2911]: E0114 14:35:03.547199 2911 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-a739250a79\" not found" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:04.107554 kubelet[2911]: I0114 14:35:04.106231 2911 kubelet_node_status.go:76] "Successfully 
registered node" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:04.107554 kubelet[2911]: E0114 14:35:04.107324 2911 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-a739250a79.181a95c9ab46c64d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-a739250a79,UID:ci-4081.3.0-a-a739250a79,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-a739250a79,},FirstTimestamp:2025-01-14 14:34:59.699254861 +0000 UTC m=+0.630227885,LastTimestamp:2025-01-14 14:34:59.699254861 +0000 UTC m=+0.630227885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-a739250a79,}" Jan 14 14:35:05.104384 kubelet[2911]: I0114 14:35:05.104339 2911 apiserver.go:52] "Watching apiserver" Jan 14 14:35:05.111523 kubelet[2911]: I0114 14:35:05.111436 2911 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 14:35:06.509775 kubelet[2911]: W0114 14:35:06.509664 2911 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 14:35:07.365537 systemd[1]: Reloading requested from client PID 3186 ('systemctl') (unit session-9.scope)... Jan 14 14:35:07.365554 systemd[1]: Reloading... Jan 14 14:35:07.510470 zram_generator::config[3222]: No configuration found. Jan 14 14:35:07.657423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 14:35:07.757372 systemd[1]: Reloading finished in 391 ms. 
Jan 14 14:35:07.802526 kubelet[2911]: I0114 14:35:07.802485 2911 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 14:35:07.802865 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:07.809134 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 14:35:07.809621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:07.816891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:07.919568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:07.927777 (kubelet)[3293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 14:35:07.988227 kubelet[3293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 14:35:07.989122 kubelet[3293]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 14:35:07.989122 kubelet[3293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 14:35:07.989122 kubelet[3293]: I0114 14:35:07.988728 3293 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 14:35:08.000124 kubelet[3293]: I0114 14:35:08.000081 3293 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 14 14:35:08.000124 kubelet[3293]: I0114 14:35:08.000111 3293 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 14:35:08.000377 kubelet[3293]: I0114 14:35:08.000354 3293 server.go:919] "Client rotation is on, will bootstrap in background" Jan 14 14:35:08.001895 kubelet[3293]: I0114 14:35:08.001868 3293 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 14:35:08.003956 kubelet[3293]: I0114 14:35:08.003761 3293 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 14:35:08.010971 kubelet[3293]: I0114 14:35:08.010919 3293 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 14:35:08.011195 kubelet[3293]: I0114 14:35:08.011176 3293 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 14:35:08.011371 kubelet[3293]: I0114 14:35:08.011352 3293 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 14:35:08.011533 kubelet[3293]: I0114 14:35:08.011383 3293 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 14:35:08.011533 kubelet[3293]: I0114 14:35:08.011424 3293 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 14:35:08.011533 kubelet[3293]: I0114 
14:35:08.011469 3293 state_mem.go:36] "Initialized new in-memory state store" Jan 14 14:35:08.011655 kubelet[3293]: I0114 14:35:08.011603 3293 kubelet.go:396] "Attempting to sync node with API server" Jan 14 14:35:08.011655 kubelet[3293]: I0114 14:35:08.011622 3293 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 14:35:08.011655 kubelet[3293]: I0114 14:35:08.011652 3293 kubelet.go:312] "Adding apiserver pod source" Jan 14 14:35:08.016476 kubelet[3293]: I0114 14:35:08.011669 3293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 14:35:08.016476 kubelet[3293]: I0114 14:35:08.016437 3293 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 14 14:35:08.016911 kubelet[3293]: I0114 14:35:08.016653 3293 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 14:35:08.017161 kubelet[3293]: I0114 14:35:08.017111 3293 server.go:1256] "Started kubelet" Jan 14 14:35:08.022424 kubelet[3293]: I0114 14:35:08.022354 3293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 14:35:08.029772 kubelet[3293]: I0114 14:35:08.029600 3293 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 14:35:08.035414 kubelet[3293]: I0114 14:35:08.033269 3293 server.go:461] "Adding debug handlers to kubelet server" Jan 14 14:35:08.044955 kubelet[3293]: I0114 14:35:08.042719 3293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 14:35:08.044955 kubelet[3293]: I0114 14:35:08.042966 3293 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 14:35:08.045961 kubelet[3293]: I0114 14:35:08.045850 3293 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 14:35:08.051531 kubelet[3293]: I0114 14:35:08.051503 3293 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Jan 14 14:35:08.051996 kubelet[3293]: I0114 14:35:08.051977 3293 reconciler_new.go:29] "Reconciler: start to sync state" Jan 14 14:35:08.054381 kubelet[3293]: I0114 14:35:08.054360 3293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 14:35:08.055884 kubelet[3293]: I0114 14:35:08.055867 3293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 14:35:08.055988 kubelet[3293]: I0114 14:35:08.055981 3293 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 14:35:08.056067 kubelet[3293]: I0114 14:35:08.056060 3293 kubelet.go:2329] "Starting kubelet main sync loop" Jan 14 14:35:08.056168 kubelet[3293]: E0114 14:35:08.056159 3293 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 14:35:08.063475 kubelet[3293]: E0114 14:35:08.063027 3293 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 14:35:08.067705 kubelet[3293]: I0114 14:35:08.067677 3293 factory.go:221] Registration of the systemd container factory successfully Jan 14 14:35:08.068083 kubelet[3293]: I0114 14:35:08.068055 3293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 14:35:08.070219 kubelet[3293]: I0114 14:35:08.070194 3293 factory.go:221] Registration of the containerd container factory successfully Jan 14 14:35:08.118828 kubelet[3293]: I0114 14:35:08.118793 3293 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 14:35:08.118828 kubelet[3293]: I0114 14:35:08.118814 3293 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 14:35:08.118828 kubelet[3293]: I0114 14:35:08.118834 3293 state_mem.go:36] "Initialized new in-memory state store" Jan 14 14:35:08.119072 kubelet[3293]: I0114 14:35:08.119015 3293 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 14:35:08.119072 kubelet[3293]: I0114 14:35:08.119043 3293 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 14:35:08.119072 kubelet[3293]: I0114 14:35:08.119053 3293 policy_none.go:49] "None policy: Start" Jan 14 14:35:08.119794 kubelet[3293]: I0114 14:35:08.119769 3293 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 14:35:08.119794 kubelet[3293]: I0114 14:35:08.119798 3293 state_mem.go:35] "Initializing new in-memory state store" Jan 14 14:35:08.119999 kubelet[3293]: I0114 14:35:08.119981 3293 state_mem.go:75] "Updated machine memory state" Jan 14 14:35:08.123930 kubelet[3293]: I0114 14:35:08.123902 3293 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 14:35:08.125325 kubelet[3293]: I0114 14:35:08.124757 3293 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jan 14 14:35:08.151418 kubelet[3293]: I0114 14:35:08.151302 3293 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.157326 kubelet[3293]: I0114 14:35:08.157296 3293 topology_manager.go:215] "Topology Admit Handler" podUID="e3d569daf38910411a5951db0a487e17" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.157486 kubelet[3293]: I0114 14:35:08.157448 3293 topology_manager.go:215] "Topology Admit Handler" podUID="70ef4dceca76911078c0500a239b33e0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.157552 kubelet[3293]: I0114 14:35:08.157513 3293 topology_manager.go:215] "Topology Admit Handler" podUID="c11ff64f4c96d4ff36fd8c6320729c9b" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.170764 kubelet[3293]: W0114 14:35:08.170627 3293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 14:35:08.176069 kubelet[3293]: W0114 14:35:08.175635 3293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 14:35:08.176069 kubelet[3293]: I0114 14:35:08.175903 3293 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.176069 kubelet[3293]: I0114 14:35:08.176004 3293 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.176553 kubelet[3293]: W0114 14:35:08.176522 3293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 14:35:08.176808 kubelet[3293]: E0114 14:35:08.176694 3293 kubelet.go:1921] "Failed creating a mirror pod for" 
err="pods \"kube-apiserver-ci-4081.3.0-a-a739250a79\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252508 kubelet[3293]: I0114 14:35:08.252470 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3d569daf38910411a5951db0a487e17-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a739250a79\" (UID: \"e3d569daf38910411a5951db0a487e17\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252508 kubelet[3293]: I0114 14:35:08.252524 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3d569daf38910411a5951db0a487e17-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-a739250a79\" (UID: \"e3d569daf38910411a5951db0a487e17\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252844 kubelet[3293]: I0114 14:35:08.252587 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252844 kubelet[3293]: I0114 14:35:08.252628 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252844 kubelet[3293]: I0114 14:35:08.252658 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252844 kubelet[3293]: I0114 14:35:08.252687 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.252844 kubelet[3293]: I0114 14:35:08.252719 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70ef4dceca76911078c0500a239b33e0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a739250a79\" (UID: \"70ef4dceca76911078c0500a239b33e0\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.253016 kubelet[3293]: I0114 14:35:08.252747 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c11ff64f4c96d4ff36fd8c6320729c9b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-a739250a79\" (UID: \"c11ff64f4c96d4ff36fd8c6320729c9b\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-a739250a79" Jan 14 14:35:08.253016 kubelet[3293]: I0114 14:35:08.252774 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3d569daf38910411a5951db0a487e17-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a739250a79\" (UID: \"e3d569daf38910411a5951db0a487e17\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79" Jan 
14 14:35:08.411801 sudo[3325]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 14 14:35:08.412170 sudo[3325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 14 14:35:08.946449 sudo[3325]: pam_unix(sudo:session): session closed for user root Jan 14 14:35:09.014875 kubelet[3293]: I0114 14:35:09.014596 3293 apiserver.go:52] "Watching apiserver" Jan 14 14:35:09.052029 kubelet[3293]: I0114 14:35:09.051977 3293 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 14 14:35:09.112424 kubelet[3293]: W0114 14:35:09.111283 3293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 14:35:09.112843 kubelet[3293]: E0114 14:35:09.112702 3293 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-a739250a79\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79" Jan 14 14:35:09.222332 kubelet[3293]: I0114 14:35:09.222111 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-a739250a79" podStartSLOduration=1.22177663 podStartE2EDuration="1.22177663s" podCreationTimestamp="2025-01-14 14:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:09.19160851 +0000 UTC m=+1.258694728" watchObservedRunningTime="2025-01-14 14:35:09.22177663 +0000 UTC m=+1.288862748" Jan 14 14:35:09.253599 kubelet[3293]: I0114 14:35:09.253298 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-a739250a79" podStartSLOduration=3.252472955 podStartE2EDuration="3.252472955s" podCreationTimestamp="2025-01-14 14:35:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:09.252083951 +0000 UTC m=+1.319170069" watchObservedRunningTime="2025-01-14 14:35:09.252472955 +0000 UTC m=+1.319559073" Jan 14 14:35:09.253599 kubelet[3293]: I0114 14:35:09.253442 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a739250a79" podStartSLOduration=1.253409865 podStartE2EDuration="1.253409865s" podCreationTimestamp="2025-01-14 14:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:09.224228556 +0000 UTC m=+1.291314774" watchObservedRunningTime="2025-01-14 14:35:09.253409865 +0000 UTC m=+1.320496083" Jan 14 14:35:10.487870 sudo[2361]: pam_unix(sudo:session): session closed for user root Jan 14 14:35:10.594475 sshd[2358]: pam_unix(sshd:session): session closed for user core Jan 14 14:35:10.597833 systemd[1]: sshd@6-10.200.8.10:22-10.200.16.10:58978.service: Deactivated successfully. Jan 14 14:35:10.600150 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 14:35:10.600357 systemd[1]: session-9.scope: Consumed 6.367s CPU time, 190.3M memory peak, 0B memory swap peak. Jan 14 14:35:10.602152 systemd-logind[1663]: Session 9 logged out. Waiting for processes to exit. Jan 14 14:35:10.603240 systemd-logind[1663]: Removed session 9. Jan 14 14:35:20.699791 kubelet[3293]: I0114 14:35:20.699576 3293 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 14:35:20.700257 containerd[1696]: time="2025-01-14T14:35:20.700003098Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 14:35:20.700591 kubelet[3293]: I0114 14:35:20.700306 3293 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 14:35:21.365684 kubelet[3293]: I0114 14:35:21.365632 3293 topology_manager.go:215] "Topology Admit Handler" podUID="b64a0210-9e59-449b-a222-ab07af6f95b1" podNamespace="kube-system" podName="cilium-operator-5cc964979-g9pxr" Jan 14 14:35:21.377527 systemd[1]: Created slice kubepods-besteffort-podb64a0210_9e59_449b_a222_ab07af6f95b1.slice - libcontainer container kubepods-besteffort-podb64a0210_9e59_449b_a222_ab07af6f95b1.slice. Jan 14 14:35:21.537070 kubelet[3293]: I0114 14:35:21.537032 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b64a0210-9e59-449b-a222-ab07af6f95b1-cilium-config-path\") pod \"cilium-operator-5cc964979-g9pxr\" (UID: \"b64a0210-9e59-449b-a222-ab07af6f95b1\") " pod="kube-system/cilium-operator-5cc964979-g9pxr" Jan 14 14:35:21.537437 kubelet[3293]: I0114 14:35:21.537347 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/b64a0210-9e59-449b-a222-ab07af6f95b1-kube-api-access-p7r5t\") pod \"cilium-operator-5cc964979-g9pxr\" (UID: \"b64a0210-9e59-449b-a222-ab07af6f95b1\") " pod="kube-system/cilium-operator-5cc964979-g9pxr" Jan 14 14:35:21.623503 kubelet[3293]: I0114 14:35:21.623356 3293 topology_manager.go:215] "Topology Admit Handler" podUID="52ca8627-2f66-49b1-9ac8-fb4d27ac914b" podNamespace="kube-system" podName="kube-proxy-4kqc7" Jan 14 14:35:21.630085 kubelet[3293]: I0114 14:35:21.630044 3293 topology_manager.go:215] "Topology Admit Handler" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" podNamespace="kube-system" podName="cilium-szr2k" Jan 14 14:35:21.636489 systemd[1]: Created slice kubepods-besteffort-pod52ca8627_2f66_49b1_9ac8_fb4d27ac914b.slice - libcontainer 
container kubepods-besteffort-pod52ca8627_2f66_49b1_9ac8_fb4d27ac914b.slice. Jan 14 14:35:21.662323 systemd[1]: Created slice kubepods-burstable-pod07c7dbe4_af13_45a2_86cb_387d2ea87b87.slice - libcontainer container kubepods-burstable-pod07c7dbe4_af13_45a2_86cb_387d2ea87b87.slice. Jan 14 14:35:21.739090 kubelet[3293]: I0114 14:35:21.739049 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-cgroup\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.740595 kubelet[3293]: I0114 14:35:21.739549 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cni-path\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.740595 kubelet[3293]: I0114 14:35:21.739616 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52ca8627-2f66-49b1-9ac8-fb4d27ac914b-lib-modules\") pod \"kube-proxy-4kqc7\" (UID: \"52ca8627-2f66-49b1-9ac8-fb4d27ac914b\") " pod="kube-system/kube-proxy-4kqc7" Jan 14 14:35:21.741094 kubelet[3293]: I0114 14:35:21.739646 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-lib-modules\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741094 kubelet[3293]: I0114 14:35:21.740844 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hubble-tls\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741094 kubelet[3293]: I0114 14:35:21.740900 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-run\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741094 kubelet[3293]: I0114 14:35:21.740928 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-net\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741094 kubelet[3293]: I0114 14:35:21.740990 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07c7dbe4-af13-45a2-86cb-387d2ea87b87-clustermesh-secrets\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741094 kubelet[3293]: I0114 14:35:21.741041 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-config-path\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741379 kubelet[3293]: I0114 14:35:21.741073 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvmv\" (UniqueName: \"kubernetes.io/projected/52ca8627-2f66-49b1-9ac8-fb4d27ac914b-kube-api-access-sfvmv\") pod 
\"kube-proxy-4kqc7\" (UID: \"52ca8627-2f66-49b1-9ac8-fb4d27ac914b\") " pod="kube-system/kube-proxy-4kqc7" Jan 14 14:35:21.741782 kubelet[3293]: I0114 14:35:21.741437 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b966s\" (UniqueName: \"kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-kube-api-access-b966s\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741782 kubelet[3293]: I0114 14:35:21.741544 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52ca8627-2f66-49b1-9ac8-fb4d27ac914b-xtables-lock\") pod \"kube-proxy-4kqc7\" (UID: \"52ca8627-2f66-49b1-9ac8-fb4d27ac914b\") " pod="kube-system/kube-proxy-4kqc7" Jan 14 14:35:21.741782 kubelet[3293]: I0114 14:35:21.741602 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52ca8627-2f66-49b1-9ac8-fb4d27ac914b-kube-proxy\") pod \"kube-proxy-4kqc7\" (UID: \"52ca8627-2f66-49b1-9ac8-fb4d27ac914b\") " pod="kube-system/kube-proxy-4kqc7" Jan 14 14:35:21.741782 kubelet[3293]: I0114 14:35:21.741627 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-etc-cni-netd\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.741782 kubelet[3293]: I0114 14:35:21.741679 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-xtables-lock\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 
14:35:21.742455 kubelet[3293]: I0114 14:35:21.742011 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-kernel\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.742455 kubelet[3293]: I0114 14:35:21.742076 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-bpf-maps\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.742455 kubelet[3293]: I0114 14:35:21.742102 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hostproc\") pod \"cilium-szr2k\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") " pod="kube-system/cilium-szr2k" Jan 14 14:35:21.956265 containerd[1696]: time="2025-01-14T14:35:21.956220330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kqc7,Uid:52ca8627-2f66-49b1-9ac8-fb4d27ac914b,Namespace:kube-system,Attempt:0,}" Jan 14 14:35:21.969366 containerd[1696]: time="2025-01-14T14:35:21.969321622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szr2k,Uid:07c7dbe4-af13-45a2-86cb-387d2ea87b87,Namespace:kube-system,Attempt:0,}" Jan 14 14:35:21.987367 containerd[1696]: time="2025-01-14T14:35:21.987300549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-g9pxr,Uid:b64a0210-9e59-449b-a222-ab07af6f95b1,Namespace:kube-system,Attempt:0,}" Jan 14 14:35:22.018067 containerd[1696]: time="2025-01-14T14:35:22.017795263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:35:22.018067 containerd[1696]: time="2025-01-14T14:35:22.017878364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:35:22.018067 containerd[1696]: time="2025-01-14T14:35:22.017919864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:22.018067 containerd[1696]: time="2025-01-14T14:35:22.018012665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:22.042648 systemd[1]: Started cri-containerd-a31a58e5c05e0816a5f462cbcaadb1027d0e3220de5929a017bc7b931f4e9483.scope - libcontainer container a31a58e5c05e0816a5f462cbcaadb1027d0e3220de5929a017bc7b931f4e9483. Jan 14 14:35:22.051297 containerd[1696]: time="2025-01-14T14:35:22.051145097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:35:22.054240 containerd[1696]: time="2025-01-14T14:35:22.051446500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:35:22.054240 containerd[1696]: time="2025-01-14T14:35:22.051486400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:22.054240 containerd[1696]: time="2025-01-14T14:35:22.052335606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:22.082983 systemd[1]: Started cri-containerd-65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68.scope - libcontainer container 65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68. 
Jan 14 14:35:22.086119 containerd[1696]: time="2025-01-14T14:35:22.086026243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kqc7,Uid:52ca8627-2f66-49b1-9ac8-fb4d27ac914b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a31a58e5c05e0816a5f462cbcaadb1027d0e3220de5929a017bc7b931f4e9483\"" Jan 14 14:35:22.093866 containerd[1696]: time="2025-01-14T14:35:22.093714497Z" level=info msg="CreateContainer within sandbox \"a31a58e5c05e0816a5f462cbcaadb1027d0e3220de5929a017bc7b931f4e9483\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 14:35:22.113646 containerd[1696]: time="2025-01-14T14:35:22.112295927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:35:22.113646 containerd[1696]: time="2025-01-14T14:35:22.112375528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:35:22.113646 containerd[1696]: time="2025-01-14T14:35:22.112417828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:22.113646 containerd[1696]: time="2025-01-14T14:35:22.112510829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:35:22.128647 containerd[1696]: time="2025-01-14T14:35:22.128379040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szr2k,Uid:07c7dbe4-af13-45a2-86cb-387d2ea87b87,Namespace:kube-system,Attempt:0,} returns sandbox id \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\"" Jan 14 14:35:22.131491 containerd[1696]: time="2025-01-14T14:35:22.131282161Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 14 14:35:22.141608 systemd[1]: Started cri-containerd-5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0.scope - libcontainer container 5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0. Jan 14 14:35:22.164289 containerd[1696]: time="2025-01-14T14:35:22.164090192Z" level=info msg="CreateContainer within sandbox \"a31a58e5c05e0816a5f462cbcaadb1027d0e3220de5929a017bc7b931f4e9483\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c30d18891b8acebd2b852c0b1c7b03366031df19b9e9e2ecc9df38c109d86903\"" Jan 14 14:35:22.165742 containerd[1696]: time="2025-01-14T14:35:22.165712703Z" level=info msg="StartContainer for \"c30d18891b8acebd2b852c0b1c7b03366031df19b9e9e2ecc9df38c109d86903\"" Jan 14 14:35:22.201578 containerd[1696]: time="2025-01-14T14:35:22.201411654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-g9pxr,Uid:b64a0210-9e59-449b-a222-ab07af6f95b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0\"" Jan 14 14:35:22.209574 systemd[1]: Started cri-containerd-c30d18891b8acebd2b852c0b1c7b03366031df19b9e9e2ecc9df38c109d86903.scope - libcontainer container c30d18891b8acebd2b852c0b1c7b03366031df19b9e9e2ecc9df38c109d86903. 
Jan 14 14:35:22.240245 containerd[1696]: time="2025-01-14T14:35:22.240181826Z" level=info msg="StartContainer for \"c30d18891b8acebd2b852c0b1c7b03366031df19b9e9e2ecc9df38c109d86903\" returns successfully" Jan 14 14:35:23.155180 kubelet[3293]: I0114 14:35:23.155100 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4kqc7" podStartSLOduration=2.155031858 podStartE2EDuration="2.155031858s" podCreationTimestamp="2025-01-14 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:23.153185245 +0000 UTC m=+15.220271463" watchObservedRunningTime="2025-01-14 14:35:23.155031858 +0000 UTC m=+15.222118076" Jan 14 14:35:29.158652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1666903724.mount: Deactivated successfully. Jan 14 14:35:31.268633 containerd[1696]: time="2025-01-14T14:35:31.268573043Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:31.270485 containerd[1696]: time="2025-01-14T14:35:31.270417860Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735383" Jan 14 14:35:31.274445 containerd[1696]: time="2025-01-14T14:35:31.274404195Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:31.276128 containerd[1696]: time="2025-01-14T14:35:31.275945709Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.144519547s" Jan 14 14:35:31.276128 containerd[1696]: time="2025-01-14T14:35:31.275990509Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 14 14:35:31.277582 containerd[1696]: time="2025-01-14T14:35:31.277355522Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 14 14:35:31.278669 containerd[1696]: time="2025-01-14T14:35:31.278542832Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 14:35:31.304490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949247475.mount: Deactivated successfully. Jan 14 14:35:31.312077 containerd[1696]: time="2025-01-14T14:35:31.312021531Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\"" Jan 14 14:35:31.312953 containerd[1696]: time="2025-01-14T14:35:31.312719737Z" level=info msg="StartContainer for \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\"" Jan 14 14:35:31.346568 systemd[1]: Started cri-containerd-8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2.scope - libcontainer container 8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2. 
Jan 14 14:35:31.371425 containerd[1696]: time="2025-01-14T14:35:31.371279761Z" level=info msg="StartContainer for \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\" returns successfully" Jan 14 14:35:31.382107 systemd[1]: cri-containerd-8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2.scope: Deactivated successfully. Jan 14 14:35:32.300826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2-rootfs.mount: Deactivated successfully. Jan 14 14:35:35.096509 containerd[1696]: time="2025-01-14T14:35:35.096424077Z" level=info msg="shim disconnected" id=8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2 namespace=k8s.io Jan 14 14:35:35.096509 containerd[1696]: time="2025-01-14T14:35:35.096501277Z" level=warning msg="cleaning up after shim disconnected" id=8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2 namespace=k8s.io Jan 14 14:35:35.096509 containerd[1696]: time="2025-01-14T14:35:35.096512177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:35:35.165070 containerd[1696]: time="2025-01-14T14:35:35.165014573Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 14:35:35.206601 containerd[1696]: time="2025-01-14T14:35:35.206548834Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\"" Jan 14 14:35:35.207426 containerd[1696]: time="2025-01-14T14:35:35.207268241Z" level=info msg="StartContainer for \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\"" Jan 14 14:35:35.242572 systemd[1]: Started 
cri-containerd-724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b.scope - libcontainer container 724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b. Jan 14 14:35:35.271301 containerd[1696]: time="2025-01-14T14:35:35.271246597Z" level=info msg="StartContainer for \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\" returns successfully" Jan 14 14:35:35.281348 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 14:35:35.282047 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 14:35:35.282138 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 14 14:35:35.288955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 14:35:35.289229 systemd[1]: cri-containerd-724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b.scope: Deactivated successfully. Jan 14 14:35:35.315490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b-rootfs.mount: Deactivated successfully. Jan 14 14:35:35.317435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 14:35:35.331454 containerd[1696]: time="2025-01-14T14:35:35.331336220Z" level=info msg="shim disconnected" id=724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b namespace=k8s.io Jan 14 14:35:35.331454 containerd[1696]: time="2025-01-14T14:35:35.331453321Z" level=warning msg="cleaning up after shim disconnected" id=724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b namespace=k8s.io Jan 14 14:35:35.331778 containerd[1696]: time="2025-01-14T14:35:35.331467321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:35:36.169353 containerd[1696]: time="2025-01-14T14:35:36.169302307Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 14:35:36.217326 containerd[1696]: time="2025-01-14T14:35:36.217273624Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\"" Jan 14 14:35:36.217939 containerd[1696]: time="2025-01-14T14:35:36.217879530Z" level=info msg="StartContainer for \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\"" Jan 14 14:35:36.253576 systemd[1]: Started cri-containerd-8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8.scope - libcontainer container 8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8. Jan 14 14:35:36.283943 systemd[1]: cri-containerd-8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8.scope: Deactivated successfully. 
Jan 14 14:35:36.285588 containerd[1696]: time="2025-01-14T14:35:36.285457117Z" level=info msg="StartContainer for \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\" returns successfully" Jan 14 14:35:36.308529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8-rootfs.mount: Deactivated successfully. Jan 14 14:35:36.318479 containerd[1696]: time="2025-01-14T14:35:36.318403904Z" level=info msg="shim disconnected" id=8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8 namespace=k8s.io Jan 14 14:35:36.318479 containerd[1696]: time="2025-01-14T14:35:36.318472305Z" level=warning msg="cleaning up after shim disconnected" id=8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8 namespace=k8s.io Jan 14 14:35:36.318479 containerd[1696]: time="2025-01-14T14:35:36.318486005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:35:37.172607 containerd[1696]: time="2025-01-14T14:35:37.172557332Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 14:35:37.206255 containerd[1696]: time="2025-01-14T14:35:37.206205625Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\"" Jan 14 14:35:37.206945 containerd[1696]: time="2025-01-14T14:35:37.206868831Z" level=info msg="StartContainer for \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\"" Jan 14 14:35:37.241561 systemd[1]: Started cri-containerd-1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5.scope - libcontainer container 1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5. 
Jan 14 14:35:37.264058 systemd[1]: cri-containerd-1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5.scope: Deactivated successfully. Jan 14 14:35:37.269260 containerd[1696]: time="2025-01-14T14:35:37.268321065Z" level=info msg="StartContainer for \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\" returns successfully" Jan 14 14:35:37.287513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5-rootfs.mount: Deactivated successfully. Jan 14 14:35:37.302416 containerd[1696]: time="2025-01-14T14:35:37.302320861Z" level=info msg="shim disconnected" id=1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5 namespace=k8s.io Jan 14 14:35:37.302416 containerd[1696]: time="2025-01-14T14:35:37.302421862Z" level=warning msg="cleaning up after shim disconnected" id=1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5 namespace=k8s.io Jan 14 14:35:37.302830 containerd[1696]: time="2025-01-14T14:35:37.302435162Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:35:38.178766 containerd[1696]: time="2025-01-14T14:35:38.178573782Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 14 14:35:38.221464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018294995.mount: Deactivated successfully. 
Jan 14 14:35:38.227406 containerd[1696]: time="2025-01-14T14:35:38.227333306Z" level=info msg="CreateContainer within sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\""
Jan 14 14:35:38.229227 containerd[1696]: time="2025-01-14T14:35:38.228043712Z" level=info msg="StartContainer for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\""
Jan 14 14:35:38.260363 systemd[1]: run-containerd-runc-k8s.io-e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f-runc.NXoIux.mount: Deactivated successfully.
Jan 14 14:35:38.268617 systemd[1]: Started cri-containerd-e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f.scope - libcontainer container e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f.
Jan 14 14:35:38.304881 containerd[1696]: time="2025-01-14T14:35:38.304688678Z" level=info msg="StartContainer for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" returns successfully"
Jan 14 14:35:38.468558 kubelet[3293]: I0114 14:35:38.467855 3293 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 14 14:35:38.512441 kubelet[3293]: I0114 14:35:38.510022 3293 topology_manager.go:215] "Topology Admit Handler" podUID="0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f" podNamespace="kube-system" podName="coredns-76f75df574-g7fnk"
Jan 14 14:35:38.524372 kubelet[3293]: I0114 14:35:38.523518 3293 topology_manager.go:215] "Topology Admit Handler" podUID="ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8" podNamespace="kube-system" podName="coredns-76f75df574-rwvzt"
Jan 14 14:35:38.527302 systemd[1]: Created slice kubepods-burstable-pod0b9ad94d_88a0_49a0_a5ef_05a9fe8d4a4f.slice - libcontainer container kubepods-burstable-pod0b9ad94d_88a0_49a0_a5ef_05a9fe8d4a4f.slice.
Jan 14 14:35:38.544805 systemd[1]: Created slice kubepods-burstable-podce27c917_9b86_4b4c_aedf_fa0f1b9e5dc8.slice - libcontainer container kubepods-burstable-podce27c917_9b86_4b4c_aedf_fa0f1b9e5dc8.slice.
Jan 14 14:35:38.566163 kubelet[3293]: I0114 14:35:38.566118 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8-config-volume\") pod \"coredns-76f75df574-rwvzt\" (UID: \"ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8\") " pod="kube-system/coredns-76f75df574-rwvzt"
Jan 14 14:35:38.566347 kubelet[3293]: I0114 14:35:38.566177 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f-config-volume\") pod \"coredns-76f75df574-g7fnk\" (UID: \"0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f\") " pod="kube-system/coredns-76f75df574-g7fnk"
Jan 14 14:35:38.566347 kubelet[3293]: I0114 14:35:38.566203 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984mm\" (UniqueName: \"kubernetes.io/projected/ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8-kube-api-access-984mm\") pod \"coredns-76f75df574-rwvzt\" (UID: \"ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8\") " pod="kube-system/coredns-76f75df574-rwvzt"
Jan 14 14:35:38.566347 kubelet[3293]: I0114 14:35:38.566232 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4td8z\" (UniqueName: \"kubernetes.io/projected/0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f-kube-api-access-4td8z\") pod \"coredns-76f75df574-g7fnk\" (UID: \"0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f\") " pod="kube-system/coredns-76f75df574-g7fnk"
Jan 14 14:35:38.840936 containerd[1696]: time="2025-01-14T14:35:38.840432038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g7fnk,Uid:0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:38.851574 containerd[1696]: time="2025-01-14T14:35:38.851527634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwvzt,Uid:ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:45.672934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854021245.mount: Deactivated successfully.
Jan 14 14:35:46.539639 containerd[1696]: time="2025-01-14T14:35:46.539583771Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:35:46.542936 containerd[1696]: time="2025-01-14T14:35:46.542860997Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906601"
Jan 14 14:35:46.546221 containerd[1696]: time="2025-01-14T14:35:46.546159824Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:35:46.547687 containerd[1696]: time="2025-01-14T14:35:46.547636636Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 15.270218014s"
Jan 14 14:35:46.547943 containerd[1696]: time="2025-01-14T14:35:46.547826137Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 14 14:35:46.550256 containerd[1696]: time="2025-01-14T14:35:46.550219857Z" level=info msg="CreateContainer within sandbox \"5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 14 14:35:46.579958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297266695.mount: Deactivated successfully.
Jan 14 14:35:46.586077 containerd[1696]: time="2025-01-14T14:35:46.586035147Z" level=info msg="CreateContainer within sandbox \"5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\""
Jan 14 14:35:46.587497 containerd[1696]: time="2025-01-14T14:35:46.586680752Z" level=info msg="StartContainer for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\""
Jan 14 14:35:46.617583 systemd[1]: Started cri-containerd-4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98.scope - libcontainer container 4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98.
Jan 14 14:35:46.645290 containerd[1696]: time="2025-01-14T14:35:46.645245226Z" level=info msg="StartContainer for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" returns successfully"
Jan 14 14:35:47.308402 kubelet[3293]: I0114 14:35:47.307771 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-szr2k" podStartSLOduration=17.161667129 podStartE2EDuration="26.307717488s" podCreationTimestamp="2025-01-14 14:35:21 +0000 UTC" firstStartedPulling="2025-01-14 14:35:22.130578756 +0000 UTC m=+14.197664874" lastFinishedPulling="2025-01-14 14:35:31.276629115 +0000 UTC m=+23.343715233" observedRunningTime="2025-01-14 14:35:39.200460269 +0000 UTC m=+31.267546487" watchObservedRunningTime="2025-01-14 14:35:47.307717488 +0000 UTC m=+39.374803606"
Jan 14 14:35:50.408791 systemd-networkd[1550]: cilium_host: Link UP
Jan 14 14:35:50.411558 systemd-networkd[1550]: cilium_net: Link UP
Jan 14 14:35:50.411776 systemd-networkd[1550]: cilium_net: Gained carrier
Jan 14 14:35:50.411978 systemd-networkd[1550]: cilium_host: Gained carrier
Jan 14 14:35:50.538534 systemd-networkd[1550]: cilium_vxlan: Link UP
Jan 14 14:35:50.538719 systemd-networkd[1550]: cilium_vxlan: Gained carrier
Jan 14 14:35:50.764420 kernel: NET: Registered PF_ALG protocol family
Jan 14 14:35:50.955556 systemd-networkd[1550]: cilium_net: Gained IPv6LL
Jan 14 14:35:51.211597 systemd-networkd[1550]: cilium_host: Gained IPv6LL
Jan 14 14:35:51.438813 systemd-networkd[1550]: lxc_health: Link UP
Jan 14 14:35:51.451566 systemd-networkd[1550]: lxc_health: Gained carrier
Jan 14 14:35:51.947721 kernel: eth0: renamed from tmp968cd
Jan 14 14:35:51.941971 systemd-networkd[1550]: lxc70094e69cb35: Link UP
Jan 14 14:35:51.959540 systemd-networkd[1550]: lxc70094e69cb35: Gained carrier
Jan 14 14:35:51.999924 systemd-networkd[1550]: lxc95e39af1a017: Link UP
Jan 14 14:35:52.006656 kernel: eth0: renamed from tmp6e93a
Jan 14 14:35:52.011460 kubelet[3293]: I0114 14:35:52.010168 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-g9pxr" podStartSLOduration=6.666585752 podStartE2EDuration="31.010106716s" podCreationTimestamp="2025-01-14 14:35:21 +0000 UTC" firstStartedPulling="2025-01-14 14:35:22.204758077 +0000 UTC m=+14.271844195" lastFinishedPulling="2025-01-14 14:35:46.548278941 +0000 UTC m=+38.615365159" observedRunningTime="2025-01-14 14:35:47.311545019 +0000 UTC m=+39.378631137" watchObservedRunningTime="2025-01-14 14:35:52.010106716 +0000 UTC m=+44.077192934"
Jan 14 14:35:52.016559 systemd-networkd[1550]: lxc95e39af1a017: Gained carrier
Jan 14 14:35:52.107587 systemd-networkd[1550]: cilium_vxlan: Gained IPv6LL
Jan 14 14:35:53.131611 systemd-networkd[1550]: lxc_health: Gained IPv6LL
Jan 14 14:35:53.451556 systemd-networkd[1550]: lxc95e39af1a017: Gained IPv6LL
Jan 14 14:35:53.451928 systemd-networkd[1550]: lxc70094e69cb35: Gained IPv6LL
Jan 14 14:35:55.921799 containerd[1696]: time="2025-01-14T14:35:55.920903061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:55.921799 containerd[1696]: time="2025-01-14T14:35:55.921173763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:55.921799 containerd[1696]: time="2025-01-14T14:35:55.921191463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:55.921799 containerd[1696]: time="2025-01-14T14:35:55.921464865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:55.958777 systemd[1]: Started cri-containerd-6e93a0499c032e88c7d995677c7d67b3337a5bf40a43174d1a9114158239de81.scope - libcontainer container 6e93a0499c032e88c7d995677c7d67b3337a5bf40a43174d1a9114158239de81.
Jan 14 14:35:55.975436 containerd[1696]: time="2025-01-14T14:35:55.975019274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:55.976309 containerd[1696]: time="2025-01-14T14:35:55.975100074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:55.976309 containerd[1696]: time="2025-01-14T14:35:55.975171975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:55.977433 containerd[1696]: time="2025-01-14T14:35:55.976606786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:56.020971 systemd[1]: run-containerd-runc-k8s.io-968cd05f49eab19dd7dadac26ffc0f4b7c1fe03130fef2b93d660eeee483888f-runc.Ki96gU.mount: Deactivated successfully.
Jan 14 14:35:56.034860 systemd[1]: Started cri-containerd-968cd05f49eab19dd7dadac26ffc0f4b7c1fe03130fef2b93d660eeee483888f.scope - libcontainer container 968cd05f49eab19dd7dadac26ffc0f4b7c1fe03130fef2b93d660eeee483888f.
Jan 14 14:35:56.073572 containerd[1696]: time="2025-01-14T14:35:56.073500725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rwvzt,Uid:ce27c917-9b86-4b4c-aedf-fa0f1b9e5dc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e93a0499c032e88c7d995677c7d67b3337a5bf40a43174d1a9114158239de81\""
Jan 14 14:35:56.078478 containerd[1696]: time="2025-01-14T14:35:56.078276462Z" level=info msg="CreateContainer within sandbox \"6e93a0499c032e88c7d995677c7d67b3337a5bf40a43174d1a9114158239de81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 14 14:35:56.116676 containerd[1696]: time="2025-01-14T14:35:56.116556754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g7fnk,Uid:0b9ad94d-88a0-49a0-a5ef-05a9fe8d4a4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"968cd05f49eab19dd7dadac26ffc0f4b7c1fe03130fef2b93d660eeee483888f\""
Jan 14 14:35:56.120700 containerd[1696]: time="2025-01-14T14:35:56.120638785Z" level=info msg="CreateContainer within sandbox \"968cd05f49eab19dd7dadac26ffc0f4b7c1fe03130fef2b93d660eeee483888f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 14 14:35:56.127478 containerd[1696]: time="2025-01-14T14:35:56.127428637Z" level=info msg="CreateContainer within sandbox \"6e93a0499c032e88c7d995677c7d67b3337a5bf40a43174d1a9114158239de81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ec98d46cbb42dd83235c396915bc79c5765f3667bfb1fd9deb57b7c0f58728b\""
Jan 14 14:35:56.131422 containerd[1696]: time="2025-01-14T14:35:56.129793755Z" level=info msg="StartContainer for \"7ec98d46cbb42dd83235c396915bc79c5765f3667bfb1fd9deb57b7c0f58728b\""
Jan 14 14:35:56.164049 containerd[1696]: time="2025-01-14T14:35:56.163491312Z" level=info msg="CreateContainer within sandbox \"968cd05f49eab19dd7dadac26ffc0f4b7c1fe03130fef2b93d660eeee483888f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec03f3e20efdbf91de56dee47b047606c86faeee87a068756d30b9790885ecf0\""
Jan 14 14:35:56.170231 containerd[1696]: time="2025-01-14T14:35:56.167919046Z" level=info msg="StartContainer for \"ec03f3e20efdbf91de56dee47b047606c86faeee87a068756d30b9790885ecf0\""
Jan 14 14:35:56.173046 systemd[1]: Started cri-containerd-7ec98d46cbb42dd83235c396915bc79c5765f3667bfb1fd9deb57b7c0f58728b.scope - libcontainer container 7ec98d46cbb42dd83235c396915bc79c5765f3667bfb1fd9deb57b7c0f58728b.
Jan 14 14:35:56.212623 systemd[1]: Started cri-containerd-ec03f3e20efdbf91de56dee47b047606c86faeee87a068756d30b9790885ecf0.scope - libcontainer container ec03f3e20efdbf91de56dee47b047606c86faeee87a068756d30b9790885ecf0.
Jan 14 14:35:56.223420 containerd[1696]: time="2025-01-14T14:35:56.222745364Z" level=info msg="StartContainer for \"7ec98d46cbb42dd83235c396915bc79c5765f3667bfb1fd9deb57b7c0f58728b\" returns successfully"
Jan 14 14:35:56.262674 containerd[1696]: time="2025-01-14T14:35:56.262542368Z" level=info msg="StartContainer for \"ec03f3e20efdbf91de56dee47b047606c86faeee87a068756d30b9790885ecf0\" returns successfully"
Jan 14 14:35:57.253813 kubelet[3293]: I0114 14:35:57.253196 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-g7fnk" podStartSLOduration=36.253142527 podStartE2EDuration="36.253142527s" podCreationTimestamp="2025-01-14 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:57.252535523 +0000 UTC m=+49.319621741" watchObservedRunningTime="2025-01-14 14:35:57.253142527 +0000 UTC m=+49.320228645"
Jan 14 14:35:57.272029 kubelet[3293]: I0114 14:35:57.271669 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rwvzt" podStartSLOduration=36.271615168 podStartE2EDuration="36.271615168s" podCreationTimestamp="2025-01-14 14:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:57.269574853 +0000 UTC m=+49.336660971" watchObservedRunningTime="2025-01-14 14:35:57.271615168 +0000 UTC m=+49.338701286"
Jan 14 14:37:42.173727 systemd[1]: Started sshd@7-10.200.8.10:22-10.200.16.10:50342.service - OpenSSH per-connection server daemon (10.200.16.10:50342).
Jan 14 14:37:42.804093 sshd[4672]: Accepted publickey for core from 10.200.16.10 port 50342 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:37:42.805827 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:37:42.810477 systemd-logind[1663]: New session 10 of user core.
Jan 14 14:37:42.814760 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 14:37:43.609339 sshd[4672]: pam_unix(sshd:session): session closed for user core
Jan 14 14:37:43.612786 systemd[1]: sshd@7-10.200.8.10:22-10.200.16.10:50342.service: Deactivated successfully.
Jan 14 14:37:43.615975 systemd[1]: session-10.scope: Deactivated successfully.
Jan 14 14:37:43.617668 systemd-logind[1663]: Session 10 logged out. Waiting for processes to exit.
Jan 14 14:37:43.618748 systemd-logind[1663]: Removed session 10.
Jan 14 14:37:48.727742 systemd[1]: Started sshd@8-10.200.8.10:22-10.200.16.10:47604.service - OpenSSH per-connection server daemon (10.200.16.10:47604).
Jan 14 14:37:49.358594 sshd[4686]: Accepted publickey for core from 10.200.16.10 port 47604 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:37:49.360326 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:37:49.364449 systemd-logind[1663]: New session 11 of user core.
Jan 14 14:37:49.369575 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 14 14:37:49.875076 sshd[4686]: pam_unix(sshd:session): session closed for user core
Jan 14 14:37:49.878821 systemd[1]: sshd@8-10.200.8.10:22-10.200.16.10:47604.service: Deactivated successfully.
Jan 14 14:37:49.880808 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 14:37:49.882025 systemd-logind[1663]: Session 11 logged out. Waiting for processes to exit.
Jan 14 14:37:49.883352 systemd-logind[1663]: Removed session 11.
Jan 14 14:37:54.988697 systemd[1]: Started sshd@9-10.200.8.10:22-10.200.16.10:47614.service - OpenSSH per-connection server daemon (10.200.16.10:47614).
Jan 14 14:37:55.625679 sshd[4702]: Accepted publickey for core from 10.200.16.10 port 47614 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:37:55.626502 sshd[4702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:37:55.631726 systemd-logind[1663]: New session 12 of user core.
Jan 14 14:37:55.638572 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 14:37:56.142217 sshd[4702]: pam_unix(sshd:session): session closed for user core
Jan 14 14:37:56.145249 systemd[1]: sshd@9-10.200.8.10:22-10.200.16.10:47614.service: Deactivated successfully.
Jan 14 14:37:56.147956 systemd[1]: session-12.scope: Deactivated successfully.
Jan 14 14:37:56.150220 systemd-logind[1663]: Session 12 logged out. Waiting for processes to exit.
Jan 14 14:37:56.151487 systemd-logind[1663]: Removed session 12.
Jan 14 14:38:01.261753 systemd[1]: Started sshd@10-10.200.8.10:22-10.200.16.10:37686.service - OpenSSH per-connection server daemon (10.200.16.10:37686).
Jan 14 14:38:01.892648 sshd[4717]: Accepted publickey for core from 10.200.16.10 port 37686 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:01.894625 sshd[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:01.903610 systemd-logind[1663]: New session 13 of user core.
Jan 14 14:38:01.905651 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 14 14:38:02.408149 sshd[4717]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:02.411572 systemd[1]: sshd@10-10.200.8.10:22-10.200.16.10:37686.service: Deactivated successfully.
Jan 14 14:38:02.413931 systemd[1]: session-13.scope: Deactivated successfully.
Jan 14 14:38:02.415627 systemd-logind[1663]: Session 13 logged out. Waiting for processes to exit.
Jan 14 14:38:02.417060 systemd-logind[1663]: Removed session 13.
Jan 14 14:38:07.521860 systemd[1]: Started sshd@11-10.200.8.10:22-10.200.16.10:52128.service - OpenSSH per-connection server daemon (10.200.16.10:52128).
Jan 14 14:38:08.160950 sshd[4734]: Accepted publickey for core from 10.200.16.10 port 52128 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:08.162482 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:08.167440 systemd-logind[1663]: New session 14 of user core.
Jan 14 14:38:08.176588 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 14 14:38:08.671840 sshd[4734]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:08.676270 systemd[1]: sshd@11-10.200.8.10:22-10.200.16.10:52128.service: Deactivated successfully.
Jan 14 14:38:08.679079 systemd[1]: session-14.scope: Deactivated successfully.
Jan 14 14:38:08.680071 systemd-logind[1663]: Session 14 logged out. Waiting for processes to exit.
Jan 14 14:38:08.681171 systemd-logind[1663]: Removed session 14.
Jan 14 14:38:13.785981 systemd[1]: Started sshd@12-10.200.8.10:22-10.200.16.10:52130.service - OpenSSH per-connection server daemon (10.200.16.10:52130).
Jan 14 14:38:14.421966 sshd[4750]: Accepted publickey for core from 10.200.16.10 port 52130 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:14.423666 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:14.428479 systemd-logind[1663]: New session 15 of user core.
Jan 14 14:38:14.435371 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 14 14:38:14.935523 sshd[4750]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:14.939804 systemd[1]: sshd@12-10.200.8.10:22-10.200.16.10:52130.service: Deactivated successfully.
Jan 14 14:38:14.943162 systemd[1]: session-15.scope: Deactivated successfully.
Jan 14 14:38:14.944322 systemd-logind[1663]: Session 15 logged out. Waiting for processes to exit.
Jan 14 14:38:14.945800 systemd-logind[1663]: Removed session 15.
Jan 14 14:38:20.053805 systemd[1]: Started sshd@13-10.200.8.10:22-10.200.16.10:58880.service - OpenSSH per-connection server daemon (10.200.16.10:58880).
Jan 14 14:38:20.693881 sshd[4765]: Accepted publickey for core from 10.200.16.10 port 58880 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:20.695803 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:20.700180 systemd-logind[1663]: New session 16 of user core.
Jan 14 14:38:20.703603 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 14 14:38:21.227682 sshd[4765]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:21.230835 systemd-logind[1663]: Session 16 logged out. Waiting for processes to exit.
Jan 14 14:38:21.231303 systemd[1]: sshd@13-10.200.8.10:22-10.200.16.10:58880.service: Deactivated successfully.
Jan 14 14:38:21.233827 systemd[1]: session-16.scope: Deactivated successfully.
Jan 14 14:38:21.236259 systemd-logind[1663]: Removed session 16.
Jan 14 14:38:26.343965 systemd[1]: Started sshd@14-10.200.8.10:22-10.200.16.10:50690.service - OpenSSH per-connection server daemon (10.200.16.10:50690).
Jan 14 14:38:26.978734 sshd[4781]: Accepted publickey for core from 10.200.16.10 port 50690 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:26.980518 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:26.984696 systemd-logind[1663]: New session 17 of user core.
Jan 14 14:38:26.990619 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 14 14:38:27.490232 sshd[4781]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:27.493883 systemd[1]: sshd@14-10.200.8.10:22-10.200.16.10:50690.service: Deactivated successfully.
Jan 14 14:38:27.496280 systemd[1]: session-17.scope: Deactivated successfully.
Jan 14 14:38:27.498044 systemd-logind[1663]: Session 17 logged out. Waiting for processes to exit.
Jan 14 14:38:27.499270 systemd-logind[1663]: Removed session 17.
Jan 14 14:38:27.608166 systemd[1]: Started sshd@15-10.200.8.10:22-10.200.16.10:50700.service - OpenSSH per-connection server daemon (10.200.16.10:50700).
Jan 14 14:38:28.259312 sshd[4795]: Accepted publickey for core from 10.200.16.10 port 50700 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:28.262076 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:28.267314 systemd-logind[1663]: New session 18 of user core.
Jan 14 14:38:28.272598 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 14 14:38:28.817212 sshd[4795]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:28.822051 systemd[1]: sshd@15-10.200.8.10:22-10.200.16.10:50700.service: Deactivated successfully.
Jan 14 14:38:28.824503 systemd[1]: session-18.scope: Deactivated successfully.
Jan 14 14:38:28.825597 systemd-logind[1663]: Session 18 logged out. Waiting for processes to exit.
Jan 14 14:38:28.826629 systemd-logind[1663]: Removed session 18.
Jan 14 14:38:28.937453 systemd[1]: Started sshd@16-10.200.8.10:22-10.200.16.10:50702.service - OpenSSH per-connection server daemon (10.200.16.10:50702).
Jan 14 14:38:29.571923 sshd[4806]: Accepted publickey for core from 10.200.16.10 port 50702 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:29.573532 sshd[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:29.578812 systemd-logind[1663]: New session 19 of user core.
Jan 14 14:38:29.586635 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 14 14:38:30.082929 sshd[4806]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:30.086461 systemd[1]: sshd@16-10.200.8.10:22-10.200.16.10:50702.service: Deactivated successfully.
Jan 14 14:38:30.088690 systemd[1]: session-19.scope: Deactivated successfully.
Jan 14 14:38:30.090161 systemd-logind[1663]: Session 19 logged out. Waiting for processes to exit.
Jan 14 14:38:30.091264 systemd-logind[1663]: Removed session 19.
Jan 14 14:38:35.199878 systemd[1]: Started sshd@17-10.200.8.10:22-10.200.16.10:50718.service - OpenSSH per-connection server daemon (10.200.16.10:50718).
Jan 14 14:38:35.832293 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 50718 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:35.833910 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:35.838819 systemd-logind[1663]: New session 20 of user core.
Jan 14 14:38:35.849660 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 14 14:38:36.343613 sshd[4819]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:36.348761 systemd[1]: sshd@17-10.200.8.10:22-10.200.16.10:50718.service: Deactivated successfully.
Jan 14 14:38:36.351419 systemd[1]: session-20.scope: Deactivated successfully.
Jan 14 14:38:36.352981 systemd-logind[1663]: Session 20 logged out. Waiting for processes to exit.
Jan 14 14:38:36.354662 systemd-logind[1663]: Removed session 20.
Jan 14 14:38:41.471696 systemd[1]: Started sshd@18-10.200.8.10:22-10.200.16.10:34220.service - OpenSSH per-connection server daemon (10.200.16.10:34220).
Jan 14 14:38:42.115280 sshd[4836]: Accepted publickey for core from 10.200.16.10 port 34220 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:42.116964 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:42.121700 systemd-logind[1663]: New session 21 of user core.
Jan 14 14:38:42.129600 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 14 14:38:42.635807 sshd[4836]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:42.639061 systemd[1]: sshd@18-10.200.8.10:22-10.200.16.10:34220.service: Deactivated successfully.
Jan 14 14:38:42.641445 systemd[1]: session-21.scope: Deactivated successfully.
Jan 14 14:38:42.643246 systemd-logind[1663]: Session 21 logged out. Waiting for processes to exit.
Jan 14 14:38:42.644517 systemd-logind[1663]: Removed session 21.
Jan 14 14:38:42.754850 systemd[1]: Started sshd@19-10.200.8.10:22-10.200.16.10:34232.service - OpenSSH per-connection server daemon (10.200.16.10:34232).
Jan 14 14:38:43.385954 sshd[4849]: Accepted publickey for core from 10.200.16.10 port 34232 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:43.387627 sshd[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:43.392772 systemd-logind[1663]: New session 22 of user core.
Jan 14 14:38:43.397636 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 14 14:38:43.968606 sshd[4849]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:43.972038 systemd[1]: sshd@19-10.200.8.10:22-10.200.16.10:34232.service: Deactivated successfully.
Jan 14 14:38:43.974809 systemd[1]: session-22.scope: Deactivated successfully.
Jan 14 14:38:43.977162 systemd-logind[1663]: Session 22 logged out. Waiting for processes to exit.
Jan 14 14:38:43.978867 systemd-logind[1663]: Removed session 22.
Jan 14 14:38:44.086829 systemd[1]: Started sshd@20-10.200.8.10:22-10.200.16.10:34238.service - OpenSSH per-connection server daemon (10.200.16.10:34238).
Jan 14 14:38:44.718191 sshd[4860]: Accepted publickey for core from 10.200.16.10 port 34238 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:44.718990 sshd[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:44.724612 systemd-logind[1663]: New session 23 of user core.
Jan 14 14:38:44.727582 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 14 14:38:46.551511 sshd[4860]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:46.554977 systemd[1]: sshd@20-10.200.8.10:22-10.200.16.10:34238.service: Deactivated successfully.
Jan 14 14:38:46.557959 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 14:38:46.559900 systemd-logind[1663]: Session 23 logged out. Waiting for processes to exit.
Jan 14 14:38:46.560949 systemd-logind[1663]: Removed session 23.
Jan 14 14:38:46.673789 systemd[1]: Started sshd@21-10.200.8.10:22-10.200.16.10:56936.service - OpenSSH per-connection server daemon (10.200.16.10:56936).
Jan 14 14:38:47.305475 sshd[4878]: Accepted publickey for core from 10.200.16.10 port 56936 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:47.307264 sshd[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:47.312117 systemd-logind[1663]: New session 24 of user core.
Jan 14 14:38:47.317600 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 14:38:47.923076 sshd[4878]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:47.926769 systemd[1]: sshd@21-10.200.8.10:22-10.200.16.10:56936.service: Deactivated successfully.
Jan 14 14:38:47.930235 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 14:38:47.932452 systemd-logind[1663]: Session 24 logged out. Waiting for processes to exit.
Jan 14 14:38:47.933844 systemd-logind[1663]: Removed session 24.
Jan 14 14:38:48.043779 systemd[1]: Started sshd@22-10.200.8.10:22-10.200.16.10:56940.service - OpenSSH per-connection server daemon (10.200.16.10:56940).
Jan 14 14:38:48.675303 sshd[4889]: Accepted publickey for core from 10.200.16.10 port 56940 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:48.676880 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:48.681943 systemd-logind[1663]: New session 25 of user core.
Jan 14 14:38:48.685544 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 14:38:49.185564 sshd[4889]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:49.189152 systemd[1]: sshd@22-10.200.8.10:22-10.200.16.10:56940.service: Deactivated successfully.
Jan 14 14:38:49.191951 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 14:38:49.194279 systemd-logind[1663]: Session 25 logged out. Waiting for processes to exit.
Jan 14 14:38:49.195991 systemd-logind[1663]: Removed session 25.
Jan 14 14:38:54.303726 systemd[1]: Started sshd@23-10.200.8.10:22-10.200.16.10:56950.service - OpenSSH per-connection server daemon (10.200.16.10:56950).
Jan 14 14:38:54.936194 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 56950 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:38:54.938085 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:38:54.942751 systemd-logind[1663]: New session 26 of user core.
Jan 14 14:38:54.948728 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 14:38:55.465285 sshd[4904]: pam_unix(sshd:session): session closed for user core
Jan 14 14:38:55.468878 systemd[1]: sshd@23-10.200.8.10:22-10.200.16.10:56950.service: Deactivated successfully.
Jan 14 14:38:55.471797 systemd[1]: session-26.scope: Deactivated successfully.
Jan 14 14:38:55.473675 systemd-logind[1663]: Session 26 logged out. Waiting for processes to exit.
Jan 14 14:38:55.477779 systemd-logind[1663]: Removed session 26.
Jan 14 14:39:00.583763 systemd[1]: Started sshd@24-10.200.8.10:22-10.200.16.10:52158.service - OpenSSH per-connection server daemon (10.200.16.10:52158).
Jan 14 14:39:01.222561 sshd[4920]: Accepted publickey for core from 10.200.16.10 port 52158 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:39:01.224116 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:39:01.228808 systemd-logind[1663]: New session 27 of user core.
Jan 14 14:39:01.235613 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 14 14:39:01.730630 sshd[4920]: pam_unix(sshd:session): session closed for user core
Jan 14 14:39:01.734209 systemd[1]: sshd@24-10.200.8.10:22-10.200.16.10:52158.service: Deactivated successfully.
Jan 14 14:39:01.736879 systemd[1]: session-27.scope: Deactivated successfully.
Jan 14 14:39:01.738760 systemd-logind[1663]: Session 27 logged out. Waiting for processes to exit.
Jan 14 14:39:01.739871 systemd-logind[1663]: Removed session 27.
Jan 14 14:39:06.847744 systemd[1]: Started sshd@25-10.200.8.10:22-10.200.16.10:38234.service - OpenSSH per-connection server daemon (10.200.16.10:38234).
Jan 14 14:39:07.479220 sshd[4933]: Accepted publickey for core from 10.200.16.10 port 38234 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:39:07.481228 sshd[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:39:07.485643 systemd-logind[1663]: New session 28 of user core.
Jan 14 14:39:07.491558 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 14:39:07.989302 sshd[4933]: pam_unix(sshd:session): session closed for user core
Jan 14 14:39:07.992920 systemd[1]: sshd@25-10.200.8.10:22-10.200.16.10:38234.service: Deactivated successfully.
Jan 14 14:39:07.995937 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 14:39:07.997874 systemd-logind[1663]: Session 28 logged out. Waiting for processes to exit.
Jan 14 14:39:07.998912 systemd-logind[1663]: Removed session 28.
Jan 14 14:39:13.102634 systemd[1]: Started sshd@26-10.200.8.10:22-10.200.16.10:38250.service - OpenSSH per-connection server daemon (10.200.16.10:38250).
Jan 14 14:39:13.738907 sshd[4948]: Accepted publickey for core from 10.200.16.10 port 38250 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:39:13.740511 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:39:13.745652 systemd-logind[1663]: New session 29 of user core.
Jan 14 14:39:13.754604 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 14 14:39:14.248953 sshd[4948]: pam_unix(sshd:session): session closed for user core
Jan 14 14:39:14.253058 systemd[1]: sshd@26-10.200.8.10:22-10.200.16.10:38250.service: Deactivated successfully.
Jan 14 14:39:14.255166 systemd[1]: session-29.scope: Deactivated successfully.
Jan 14 14:39:14.255993 systemd-logind[1663]: Session 29 logged out. Waiting for processes to exit.
Jan 14 14:39:14.257418 systemd-logind[1663]: Removed session 29.
Jan 14 14:39:14.361560 systemd[1]: Started sshd@27-10.200.8.10:22-10.200.16.10:38256.service - OpenSSH per-connection server daemon (10.200.16.10:38256).
Jan 14 14:39:15.002363 sshd[4961]: Accepted publickey for core from 10.200.16.10 port 38256 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8
Jan 14 14:39:15.004164 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 14:39:15.009145 systemd-logind[1663]: New session 30 of user core.
Jan 14 14:39:15.014603 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 14:39:16.748580 containerd[1696]: time="2025-01-14T14:39:16.748378674Z" level=info msg="StopContainer for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" with timeout 30 (s)"
Jan 14 14:39:16.750349 containerd[1696]: time="2025-01-14T14:39:16.749776882Z" level=info msg="Stop container \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" with signal terminated"
Jan 14 14:39:16.768869 systemd[1]: run-containerd-runc-k8s.io-e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f-runc.WhZPF9.mount: Deactivated successfully.
Jan 14 14:39:16.788071 systemd[1]: cri-containerd-4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98.scope: Deactivated successfully.
Jan 14 14:39:16.789065 containerd[1696]: time="2025-01-14T14:39:16.788938304Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 14:39:16.799729 containerd[1696]: time="2025-01-14T14:39:16.799635865Z" level=info msg="StopContainer for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" with timeout 2 (s)"
Jan 14 14:39:16.800302 containerd[1696]: time="2025-01-14T14:39:16.800262268Z" level=info msg="Stop container \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" with signal terminated"
Jan 14 14:39:16.812662 systemd-networkd[1550]: lxc_health: Link DOWN
Jan 14 14:39:16.812673 systemd-networkd[1550]: lxc_health: Lost carrier
Jan 14 14:39:16.826153 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98-rootfs.mount: Deactivated successfully.
Jan 14 14:39:16.839722 systemd[1]: cri-containerd-e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f.scope: Deactivated successfully.
Jan 14 14:39:16.840310 systemd[1]: cri-containerd-e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f.scope: Consumed 7.611s CPU time.
Jan 14 14:39:16.854214 containerd[1696]: time="2025-01-14T14:39:16.853939872Z" level=info msg="shim disconnected" id=4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98 namespace=k8s.io
Jan 14 14:39:16.854214 containerd[1696]: time="2025-01-14T14:39:16.854018273Z" level=warning msg="cleaning up after shim disconnected" id=4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98 namespace=k8s.io
Jan 14 14:39:16.854214 containerd[1696]: time="2025-01-14T14:39:16.854032173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 14:39:16.869579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f-rootfs.mount: Deactivated successfully.
Jan 14 14:39:16.879080 containerd[1696]: time="2025-01-14T14:39:16.879036215Z" level=warning msg="cleanup warnings time=\"2025-01-14T14:39:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 14:39:16.885273 containerd[1696]: time="2025-01-14T14:39:16.885229150Z" level=info msg="StopContainer for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" returns successfully"
Jan 14 14:39:16.886236 containerd[1696]: time="2025-01-14T14:39:16.886209055Z" level=info msg="StopPodSandbox for \"5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0\""
Jan 14 14:39:16.886376 containerd[1696]: time="2025-01-14T14:39:16.886356556Z" level=info msg="Container to stop \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 14:39:16.887190 containerd[1696]: time="2025-01-14T14:39:16.887083960Z" level=info msg="shim disconnected" id=e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f namespace=k8s.io
Jan 14 14:39:16.887190 containerd[1696]: time="2025-01-14T14:39:16.887143960Z" level=warning
msg="cleaning up after shim disconnected" id=e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f namespace=k8s.io
Jan 14 14:39:16.887190 containerd[1696]: time="2025-01-14T14:39:16.887156261Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 14:39:16.891898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0-shm.mount: Deactivated successfully.
Jan 14 14:39:16.904224 systemd[1]: cri-containerd-5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0.scope: Deactivated successfully.
Jan 14 14:39:16.921961 containerd[1696]: time="2025-01-14T14:39:16.921914558Z" level=info msg="StopContainer for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" returns successfully"
Jan 14 14:39:16.922970 containerd[1696]: time="2025-01-14T14:39:16.922849163Z" level=info msg="StopPodSandbox for \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\""
Jan 14 14:39:16.922970 containerd[1696]: time="2025-01-14T14:39:16.922892163Z" level=info msg="Container to stop \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 14:39:16.923472 containerd[1696]: time="2025-01-14T14:39:16.923233265Z" level=info msg="Container to stop \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 14:39:16.923472 containerd[1696]: time="2025-01-14T14:39:16.923263065Z" level=info msg="Container to stop \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 14:39:16.923472 containerd[1696]: time="2025-01-14T14:39:16.923282165Z" level=info msg="Container to stop \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\" must be in running or unknown state, current state
\"CONTAINER_EXITED\""
Jan 14 14:39:16.923472 containerd[1696]: time="2025-01-14T14:39:16.923298265Z" level=info msg="Container to stop \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 14 14:39:16.932811 systemd[1]: cri-containerd-65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68.scope: Deactivated successfully.
Jan 14 14:39:16.957741 containerd[1696]: time="2025-01-14T14:39:16.957581260Z" level=info msg="shim disconnected" id=5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0 namespace=k8s.io
Jan 14 14:39:16.957741 containerd[1696]: time="2025-01-14T14:39:16.957646260Z" level=warning msg="cleaning up after shim disconnected" id=5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0 namespace=k8s.io
Jan 14 14:39:16.957741 containerd[1696]: time="2025-01-14T14:39:16.957661360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 14:39:16.975735 containerd[1696]: time="2025-01-14T14:39:16.975656462Z" level=info msg="shim disconnected" id=65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68 namespace=k8s.io
Jan 14 14:39:16.975735 containerd[1696]: time="2025-01-14T14:39:16.975715063Z" level=warning msg="cleaning up after shim disconnected" id=65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68 namespace=k8s.io
Jan 14 14:39:16.975735 containerd[1696]: time="2025-01-14T14:39:16.975727763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 14:39:16.982663 containerd[1696]: time="2025-01-14T14:39:16.982457301Z" level=info msg="TearDown network for sandbox \"5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0\" successfully"
Jan 14 14:39:16.982663 containerd[1696]: time="2025-01-14T14:39:16.982493701Z" level=info msg="StopPodSandbox for \"5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0\" returns successfully"
Jan 14 14:39:16.995681 containerd[1696]:
time="2025-01-14T14:39:16.995456174Z" level=warning msg="cleanup warnings time=\"2025-01-14T14:39:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 14:39:16.997073 containerd[1696]: time="2025-01-14T14:39:16.997036983Z" level=info msg="TearDown network for sandbox \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" successfully"
Jan 14 14:39:16.997073 containerd[1696]: time="2025-01-14T14:39:16.997065684Z" level=info msg="StopPodSandbox for \"65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68\" returns successfully"
Jan 14 14:39:17.009581 kubelet[3293]: I0114 14:39:17.008055 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/b64a0210-9e59-449b-a222-ab07af6f95b1-kube-api-access-p7r5t\") pod \"b64a0210-9e59-449b-a222-ab07af6f95b1\" (UID: \"b64a0210-9e59-449b-a222-ab07af6f95b1\") "
Jan 14 14:39:17.009581 kubelet[3293]: I0114 14:39:17.008107 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b64a0210-9e59-449b-a222-ab07af6f95b1-cilium-config-path\") pod \"b64a0210-9e59-449b-a222-ab07af6f95b1\" (UID: \"b64a0210-9e59-449b-a222-ab07af6f95b1\") "
Jan 14 14:39:17.012221 kubelet[3293]: I0114 14:39:17.012121 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64a0210-9e59-449b-a222-ab07af6f95b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b64a0210-9e59-449b-a222-ab07af6f95b1" (UID: "b64a0210-9e59-449b-a222-ab07af6f95b1"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 14:39:17.014145 kubelet[3293]: I0114 14:39:17.014113 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64a0210-9e59-449b-a222-ab07af6f95b1-kube-api-access-p7r5t" (OuterVolumeSpecName: "kube-api-access-p7r5t") pod "b64a0210-9e59-449b-a222-ab07af6f95b1" (UID: "b64a0210-9e59-449b-a222-ab07af6f95b1"). InnerVolumeSpecName "kube-api-access-p7r5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 14:39:17.109459 kubelet[3293]: I0114 14:39:17.109041 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-cgroup\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109459 kubelet[3293]: I0114 14:39:17.109103 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-run\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109459 kubelet[3293]: I0114 14:39:17.109128 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-lib-modules\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109459 kubelet[3293]: I0114 14:39:17.109116 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.109459 kubelet[3293]: I0114 14:39:17.109166 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b966s\" (UniqueName: \"kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-kube-api-access-b966s\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109459 kubelet[3293]: I0114 14:39:17.109180 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.109841 kubelet[3293]: I0114 14:39:17.109193 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cni-path\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109841 kubelet[3293]: I0114 14:39:17.109203 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.109841 kubelet[3293]: I0114 14:39:17.109220 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-etc-cni-netd\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109841 kubelet[3293]: I0114 14:39:17.109244 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hubble-tls\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109841 kubelet[3293]: I0114 14:39:17.109267 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-xtables-lock\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.109841 kubelet[3293]: I0114 14:39:17.109291 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-kernel\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.110086 kubelet[3293]: I0114 14:39:17.109317 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-config-path\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.110086 kubelet[3293]: I0114 14:39:17.109337 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName:
\"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-bpf-maps\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.110086 kubelet[3293]: I0114 14:39:17.109379 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-net\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110275 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07c7dbe4-af13-45a2-86cb-387d2ea87b87-clustermesh-secrets\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110319 3293 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hostproc\") pod \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\" (UID: \"07c7dbe4-af13-45a2-86cb-387d2ea87b87\") "
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110381 3293 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-cgroup\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110422 3293 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-run\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110437 3293 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName:
\"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-lib-modules\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110453 3293 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p7r5t\" (UniqueName: \"kubernetes.io/projected/b64a0210-9e59-449b-a222-ab07af6f95b1-kube-api-access-p7r5t\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.110920 kubelet[3293]: I0114 14:39:17.110470 3293 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b64a0210-9e59-449b-a222-ab07af6f95b1-cilium-config-path\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.111335 kubelet[3293]: I0114 14:39:17.110522 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hostproc" (OuterVolumeSpecName: "hostproc") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.111335 kubelet[3293]: I0114 14:39:17.110554 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cni-path" (OuterVolumeSpecName: "cni-path") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.111335 kubelet[3293]: I0114 14:39:17.110576 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.112299 kubelet[3293]: I0114 14:39:17.111894 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.112299 kubelet[3293]: I0114 14:39:17.111952 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.112299 kubelet[3293]: I0114 14:39:17.112014 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.112299 kubelet[3293]: I0114 14:39:17.112044 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "host-proc-sys-net".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 14 14:39:17.115620 kubelet[3293]: I0114 14:39:17.115585 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-kube-api-access-b966s" (OuterVolumeSpecName: "kube-api-access-b966s") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "kube-api-access-b966s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 14:39:17.116702 kubelet[3293]: I0114 14:39:17.116667 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 14 14:39:17.118121 kubelet[3293]: I0114 14:39:17.118093 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 14 14:39:17.118218 kubelet[3293]: I0114 14:39:17.118145 3293 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c7dbe4-af13-45a2-86cb-387d2ea87b87-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "07c7dbe4-af13-45a2-86cb-387d2ea87b87" (UID: "07c7dbe4-af13-45a2-86cb-387d2ea87b87"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 14 14:39:17.211670 kubelet[3293]: I0114 14:39:17.211622 3293 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07c7dbe4-af13-45a2-86cb-387d2ea87b87-clustermesh-secrets\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211668 3293 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hostproc\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211712 3293 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-net\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211727 3293 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cni-path\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211742 3293 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-b966s\" (UniqueName: \"kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-kube-api-access-b966s\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211756 3293 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-etc-cni-netd\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211769 3293 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07c7dbe4-af13-45a2-86cb-387d2ea87b87-hubble-tls\") on node
\"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211781 3293 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-xtables-lock\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.211862 kubelet[3293]: I0114 14:39:17.211792 3293 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-host-proc-sys-kernel\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.212076 kubelet[3293]: I0114 14:39:17.211807 3293 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07c7dbe4-af13-45a2-86cb-387d2ea87b87-bpf-maps\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.212076 kubelet[3293]: I0114 14:39:17.211820 3293 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07c7dbe4-af13-45a2-86cb-387d2ea87b87-cilium-config-path\") on node \"ci-4081.3.0-a-a739250a79\" DevicePath \"\""
Jan 14 14:39:17.653302 kubelet[3293]: I0114 14:39:17.653261 3293 scope.go:117] "RemoveContainer" containerID="e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f"
Jan 14 14:39:17.657824 containerd[1696]: time="2025-01-14T14:39:17.657769028Z" level=info msg="RemoveContainer for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\""
Jan 14 14:39:17.661148 systemd[1]: Removed slice kubepods-burstable-pod07c7dbe4_af13_45a2_86cb_387d2ea87b87.slice - libcontainer container kubepods-burstable-pod07c7dbe4_af13_45a2_86cb_387d2ea87b87.slice.
Jan 14 14:39:17.661541 systemd[1]: kubepods-burstable-pod07c7dbe4_af13_45a2_86cb_387d2ea87b87.slice: Consumed 7.693s CPU time.
Jan 14 14:39:17.666999 systemd[1]: Removed slice kubepods-besteffort-podb64a0210_9e59_449b_a222_ab07af6f95b1.slice - libcontainer container kubepods-besteffort-podb64a0210_9e59_449b_a222_ab07af6f95b1.slice.
Jan 14 14:39:17.673302 containerd[1696]: time="2025-01-14T14:39:17.673260716Z" level=info msg="RemoveContainer for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" returns successfully"
Jan 14 14:39:17.673618 kubelet[3293]: I0114 14:39:17.673592 3293 scope.go:117] "RemoveContainer" containerID="1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5"
Jan 14 14:39:17.675887 containerd[1696]: time="2025-01-14T14:39:17.675596629Z" level=info msg="RemoveContainer for \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\""
Jan 14 14:39:17.684295 containerd[1696]: time="2025-01-14T14:39:17.684251978Z" level=info msg="RemoveContainer for \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\" returns successfully"
Jan 14 14:39:17.685038 kubelet[3293]: I0114 14:39:17.684881 3293 scope.go:117] "RemoveContainer" containerID="8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8"
Jan 14 14:39:17.686953 containerd[1696]: time="2025-01-14T14:39:17.686574892Z" level=info msg="RemoveContainer for \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\""
Jan 14 14:39:17.695718 containerd[1696]: time="2025-01-14T14:39:17.695664543Z" level=info msg="RemoveContainer for \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\" returns successfully"
Jan 14 14:39:17.696591 kubelet[3293]: I0114 14:39:17.696469 3293 scope.go:117] "RemoveContainer" containerID="724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b"
Jan 14 14:39:17.700420 containerd[1696]: time="2025-01-14T14:39:17.699435965Z" level=info msg="RemoveContainer for \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\""
Jan 14 14:39:17.708003 containerd[1696]: time="2025-01-14T14:39:17.707960313Z" level=info
msg="RemoveContainer for \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\" returns successfully" Jan 14 14:39:17.708368 kubelet[3293]: I0114 14:39:17.708339 3293 scope.go:117] "RemoveContainer" containerID="8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2" Jan 14 14:39:17.709583 containerd[1696]: time="2025-01-14T14:39:17.709546922Z" level=info msg="RemoveContainer for \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\"" Jan 14 14:39:17.719297 containerd[1696]: time="2025-01-14T14:39:17.719254777Z" level=info msg="RemoveContainer for \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\" returns successfully" Jan 14 14:39:17.719591 kubelet[3293]: I0114 14:39:17.719566 3293 scope.go:117] "RemoveContainer" containerID="e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f" Jan 14 14:39:17.719894 containerd[1696]: time="2025-01-14T14:39:17.719852780Z" level=error msg="ContainerStatus for \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\": not found" Jan 14 14:39:17.720242 kubelet[3293]: E0114 14:39:17.720049 3293 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\": not found" containerID="e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f" Jan 14 14:39:17.720242 kubelet[3293]: I0114 14:39:17.720144 3293 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f"} err="failed to get container status \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\": rpc error: code = NotFound desc = an error occurred when try to 
find container \"e54364e83a5b07b8f3215d6e1d83a9ed7843c42f20bfb2cf39d4547986f5a82f\": not found" Jan 14 14:39:17.720242 kubelet[3293]: I0114 14:39:17.720157 3293 scope.go:117] "RemoveContainer" containerID="1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5" Jan 14 14:39:17.720475 containerd[1696]: time="2025-01-14T14:39:17.720365683Z" level=error msg="ContainerStatus for \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\": not found" Jan 14 14:39:17.720673 kubelet[3293]: E0114 14:39:17.720645 3293 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\": not found" containerID="1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5" Jan 14 14:39:17.720752 kubelet[3293]: I0114 14:39:17.720680 3293 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5"} err="failed to get container status \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1064f82c47497064a70f1a51d301ec3e9176e5b8a01f72feb533c3770623ecf5\": not found" Jan 14 14:39:17.720752 kubelet[3293]: I0114 14:39:17.720694 3293 scope.go:117] "RemoveContainer" containerID="8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8" Jan 14 14:39:17.720934 containerd[1696]: time="2025-01-14T14:39:17.720882686Z" level=error msg="ContainerStatus for \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\": not found" Jan 14 14:39:17.721039 kubelet[3293]: E0114 14:39:17.721020 3293 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\": not found" containerID="8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8" Jan 14 14:39:17.721133 kubelet[3293]: I0114 14:39:17.721058 3293 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8"} err="failed to get container status \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cc5a5324d20b0f9291d07d6afee6e29717a400cda613a400db8b34181a221e8\": not found" Jan 14 14:39:17.721133 kubelet[3293]: I0114 14:39:17.721073 3293 scope.go:117] "RemoveContainer" containerID="724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b" Jan 14 14:39:17.721313 containerd[1696]: time="2025-01-14T14:39:17.721281888Z" level=error msg="ContainerStatus for \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\": not found" Jan 14 14:39:17.721521 kubelet[3293]: E0114 14:39:17.721442 3293 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\": not found" containerID="724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b" Jan 14 14:39:17.721521 kubelet[3293]: I0114 14:39:17.721474 3293 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b"} err="failed to get container status \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\": rpc error: code = NotFound desc = an error occurred when try to find container \"724d4ef6ab3ccd0c40c0940a3e98ee9836f717c58d63963157abc1467f76e12b\": not found" Jan 14 14:39:17.721521 kubelet[3293]: I0114 14:39:17.721488 3293 scope.go:117] "RemoveContainer" containerID="8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2" Jan 14 14:39:17.721845 kubelet[3293]: E0114 14:39:17.721807 3293 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\": not found" containerID="8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2" Jan 14 14:39:17.721845 kubelet[3293]: I0114 14:39:17.721837 3293 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2"} err="failed to get container status \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\": not found" Jan 14 14:39:17.721979 containerd[1696]: time="2025-01-14T14:39:17.721651290Z" level=error msg="ContainerStatus for \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e02d3810d0aef7e540b63e736899e9bea0f1a44cfe72706bfefff79466e4eb2\": not found" Jan 14 14:39:17.722135 kubelet[3293]: I0114 14:39:17.721850 3293 scope.go:117] "RemoveContainer" containerID="4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98" Jan 14 14:39:17.723061 
containerd[1696]: time="2025-01-14T14:39:17.723035898Z" level=info msg="RemoveContainer for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\"" Jan 14 14:39:17.731444 containerd[1696]: time="2025-01-14T14:39:17.731377946Z" level=info msg="RemoveContainer for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" returns successfully" Jan 14 14:39:17.731666 kubelet[3293]: I0114 14:39:17.731643 3293 scope.go:117] "RemoveContainer" containerID="4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98" Jan 14 14:39:17.732023 containerd[1696]: time="2025-01-14T14:39:17.731885348Z" level=error msg="ContainerStatus for \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\": not found" Jan 14 14:39:17.732121 kubelet[3293]: E0114 14:39:17.732096 3293 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\": not found" containerID="4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98" Jan 14 14:39:17.732177 kubelet[3293]: I0114 14:39:17.732148 3293 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98"} err="failed to get container status \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c0e592bea7f7d3c2d6fe2e8f4f1a8e2688f46942a84015d5c75964a0bc9ae98\": not found" Jan 14 14:39:17.755927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5586e4e79625afd7cdb741b101b3ca0f94550fecdef2b2e00d9a36cf9fd880f0-rootfs.mount: Deactivated successfully. 
Jan 14 14:39:17.756039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68-rootfs.mount: Deactivated successfully. Jan 14 14:39:17.756119 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65fd2a73a6ea74157fda95e80f83d4ffd023d8aa763fad2f8b6358903f680a68-shm.mount: Deactivated successfully. Jan 14 14:39:17.756199 systemd[1]: var-lib-kubelet-pods-07c7dbe4\x2daf13\x2d45a2\x2d86cb\x2d387d2ea87b87-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db966s.mount: Deactivated successfully. Jan 14 14:39:17.756412 systemd[1]: var-lib-kubelet-pods-07c7dbe4\x2daf13\x2d45a2\x2d86cb\x2d387d2ea87b87-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 14 14:39:17.756565 systemd[1]: var-lib-kubelet-pods-07c7dbe4\x2daf13\x2d45a2\x2d86cb\x2d387d2ea87b87-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 14 14:39:17.756723 systemd[1]: var-lib-kubelet-pods-b64a0210\x2d9e59\x2d449b\x2da222\x2dab07af6f95b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7r5t.mount: Deactivated successfully. 
Jan 14 14:39:18.062602 kubelet[3293]: I0114 14:39:18.060104 3293 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" path="/var/lib/kubelet/pods/07c7dbe4-af13-45a2-86cb-387d2ea87b87/volumes" Jan 14 14:39:18.062602 kubelet[3293]: I0114 14:39:18.062042 3293 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b64a0210-9e59-449b-a222-ab07af6f95b1" path="/var/lib/kubelet/pods/b64a0210-9e59-449b-a222-ab07af6f95b1/volumes" Jan 14 14:39:18.194557 kubelet[3293]: E0114 14:39:18.194523 3293 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 14:39:18.787260 sshd[4961]: pam_unix(sshd:session): session closed for user core Jan 14 14:39:18.791254 systemd[1]: sshd@27-10.200.8.10:22-10.200.16.10:38256.service: Deactivated successfully. Jan 14 14:39:18.794122 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 14:39:18.799907 systemd-logind[1663]: Session 30 logged out. Waiting for processes to exit. Jan 14 14:39:18.801205 systemd-logind[1663]: Removed session 30. Jan 14 14:39:18.904728 systemd[1]: Started sshd@28-10.200.8.10:22-10.200.16.10:52772.service - OpenSSH per-connection server daemon (10.200.16.10:52772). Jan 14 14:39:19.539938 sshd[5124]: Accepted publickey for core from 10.200.16.10 port 52772 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:39:19.541561 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:39:19.546939 systemd-logind[1663]: New session 31 of user core. Jan 14 14:39:19.557690 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 14 14:39:20.469619 kubelet[3293]: I0114 14:39:20.469558 3293 topology_manager.go:215] "Topology Admit Handler" podUID="a03116f2-b42c-42f2-9a68-e9c3b1239576" podNamespace="kube-system" podName="cilium-wf9wv" Jan 14 14:39:20.470128 kubelet[3293]: E0114 14:39:20.469672 3293 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" containerName="mount-cgroup" Jan 14 14:39:20.470128 kubelet[3293]: E0114 14:39:20.469685 3293 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" containerName="apply-sysctl-overwrites" Jan 14 14:39:20.470128 kubelet[3293]: E0114 14:39:20.469695 3293 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" containerName="clean-cilium-state" Jan 14 14:39:20.470128 kubelet[3293]: E0114 14:39:20.469703 3293 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" containerName="cilium-agent" Jan 14 14:39:20.470128 kubelet[3293]: E0114 14:39:20.469710 3293 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b64a0210-9e59-449b-a222-ab07af6f95b1" containerName="cilium-operator" Jan 14 14:39:20.470128 kubelet[3293]: E0114 14:39:20.469720 3293 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" containerName="mount-bpf-fs" Jan 14 14:39:20.470128 kubelet[3293]: I0114 14:39:20.469763 3293 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c7dbe4-af13-45a2-86cb-387d2ea87b87" containerName="cilium-agent" Jan 14 14:39:20.470128 kubelet[3293]: I0114 14:39:20.469776 3293 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64a0210-9e59-449b-a222-ab07af6f95b1" containerName="cilium-operator" Jan 14 14:39:20.484470 systemd[1]: Created slice kubepods-burstable-poda03116f2_b42c_42f2_9a68_e9c3b1239576.slice - libcontainer container 
kubepods-burstable-poda03116f2_b42c_42f2_9a68_e9c3b1239576.slice. Jan 14 14:39:20.537688 kubelet[3293]: I0114 14:39:20.537626 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-cilium-cgroup\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538317 kubelet[3293]: I0114 14:39:20.538009 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03116f2-b42c-42f2-9a68-e9c3b1239576-cilium-config-path\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538793 kubelet[3293]: I0114 14:39:20.538722 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-cilium-run\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538885 kubelet[3293]: I0114 14:39:20.538827 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a03116f2-b42c-42f2-9a68-e9c3b1239576-cilium-ipsec-secrets\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538885 kubelet[3293]: I0114 14:39:20.538860 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-host-proc-sys-kernel\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538969 kubelet[3293]: 
I0114 14:39:20.538891 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03116f2-b42c-42f2-9a68-e9c3b1239576-hubble-tls\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538969 kubelet[3293]: I0114 14:39:20.538923 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-hostproc\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.538969 kubelet[3293]: I0114 14:39:20.538952 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-host-proc-sys-net\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539104 kubelet[3293]: I0114 14:39:20.538981 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03116f2-b42c-42f2-9a68-e9c3b1239576-clustermesh-secrets\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539104 kubelet[3293]: I0114 14:39:20.539011 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-bpf-maps\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539104 kubelet[3293]: I0114 14:39:20.539046 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-cni-path\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539104 kubelet[3293]: I0114 14:39:20.539073 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-etc-cni-netd\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539104 kubelet[3293]: I0114 14:39:20.539100 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-lib-modules\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539291 kubelet[3293]: I0114 14:39:20.539127 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03116f2-b42c-42f2-9a68-e9c3b1239576-xtables-lock\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.539291 kubelet[3293]: I0114 14:39:20.539155 3293 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np2rf\" (UniqueName: \"kubernetes.io/projected/a03116f2-b42c-42f2-9a68-e9c3b1239576-kube-api-access-np2rf\") pod \"cilium-wf9wv\" (UID: \"a03116f2-b42c-42f2-9a68-e9c3b1239576\") " pod="kube-system/cilium-wf9wv" Jan 14 14:39:20.542106 sshd[5124]: pam_unix(sshd:session): session closed for user core Jan 14 14:39:20.546509 systemd[1]: sshd@28-10.200.8.10:22-10.200.16.10:52772.service: Deactivated successfully. Jan 14 14:39:20.549296 systemd[1]: session-31.scope: Deactivated successfully. 
Jan 14 14:39:20.553651 systemd-logind[1663]: Session 31 logged out. Waiting for processes to exit. Jan 14 14:39:20.555469 systemd-logind[1663]: Removed session 31. Jan 14 14:39:20.678735 systemd[1]: Started sshd@29-10.200.8.10:22-10.200.16.10:52776.service - OpenSSH per-connection server daemon (10.200.16.10:52776). Jan 14 14:39:20.789517 containerd[1696]: time="2025-01-14T14:39:20.788920363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wf9wv,Uid:a03116f2-b42c-42f2-9a68-e9c3b1239576,Namespace:kube-system,Attempt:0,}" Jan 14 14:39:20.839658 containerd[1696]: time="2025-01-14T14:39:20.839309985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:39:20.839658 containerd[1696]: time="2025-01-14T14:39:20.839401485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:39:20.839658 containerd[1696]: time="2025-01-14T14:39:20.839444186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:39:20.839658 containerd[1696]: time="2025-01-14T14:39:20.839546486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:39:20.861590 systemd[1]: Started cri-containerd-542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294.scope - libcontainer container 542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294. 
Jan 14 14:39:20.885324 containerd[1696]: time="2025-01-14T14:39:20.885268678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wf9wv,Uid:a03116f2-b42c-42f2-9a68-e9c3b1239576,Namespace:kube-system,Attempt:0,} returns sandbox id \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\"" Jan 14 14:39:20.889315 containerd[1696]: time="2025-01-14T14:39:20.889164603Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 14 14:39:20.931472 containerd[1696]: time="2025-01-14T14:39:20.931347572Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873\"" Jan 14 14:39:20.932645 containerd[1696]: time="2025-01-14T14:39:20.932606680Z" level=info msg="StartContainer for \"f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873\"" Jan 14 14:39:20.966564 systemd[1]: Started cri-containerd-f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873.scope - libcontainer container f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873. Jan 14 14:39:20.995983 containerd[1696]: time="2025-01-14T14:39:20.995923484Z" level=info msg="StartContainer for \"f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873\" returns successfully" Jan 14 14:39:21.000946 systemd[1]: cri-containerd-f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873.scope: Deactivated successfully. 
Jan 14 14:39:21.067471 containerd[1696]: time="2025-01-14T14:39:21.066717836Z" level=info msg="shim disconnected" id=f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873 namespace=k8s.io Jan 14 14:39:21.067471 containerd[1696]: time="2025-01-14T14:39:21.066771736Z" level=warning msg="cleaning up after shim disconnected" id=f4533c5cd0a893de162ea07e422be4489446cee01fa94107a15558170d84d873 namespace=k8s.io Jan 14 14:39:21.067471 containerd[1696]: time="2025-01-14T14:39:21.066781436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:39:21.086509 containerd[1696]: time="2025-01-14T14:39:21.086443262Z" level=warning msg="cleanup warnings time=\"2025-01-14T14:39:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 14:39:21.329819 sshd[5141]: Accepted publickey for core from 10.200.16.10 port 52776 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:39:21.331692 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:39:21.336320 systemd-logind[1663]: New session 32 of user core. Jan 14 14:39:21.339590 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 14 14:39:21.691170 containerd[1696]: time="2025-01-14T14:39:21.691114520Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 14 14:39:21.727288 containerd[1696]: time="2025-01-14T14:39:21.727237851Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6\"" Jan 14 14:39:21.728011 containerd[1696]: time="2025-01-14T14:39:21.727931155Z" level=info msg="StartContainer for \"0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6\"" Jan 14 14:39:21.778616 systemd[1]: Started cri-containerd-0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6.scope - libcontainer container 0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6. Jan 14 14:39:21.780487 sshd[5141]: pam_unix(sshd:session): session closed for user core Jan 14 14:39:21.786767 systemd[1]: sshd@29-10.200.8.10:22-10.200.16.10:52776.service: Deactivated successfully. Jan 14 14:39:21.791697 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 14:39:21.793054 systemd-logind[1663]: Session 32 logged out. Waiting for processes to exit. Jan 14 14:39:21.796348 systemd-logind[1663]: Removed session 32. Jan 14 14:39:21.820221 containerd[1696]: time="2025-01-14T14:39:21.820162244Z" level=info msg="StartContainer for \"0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6\" returns successfully" Jan 14 14:39:21.823942 systemd[1]: cri-containerd-0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6.scope: Deactivated successfully. 
Jan 14 14:39:21.863679 containerd[1696]: time="2025-01-14T14:39:21.863607521Z" level=info msg="shim disconnected" id=0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6 namespace=k8s.io Jan 14 14:39:21.863679 containerd[1696]: time="2025-01-14T14:39:21.863664121Z" level=warning msg="cleaning up after shim disconnected" id=0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6 namespace=k8s.io Jan 14 14:39:21.863679 containerd[1696]: time="2025-01-14T14:39:21.863676221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:39:21.900048 systemd[1]: Started sshd@30-10.200.8.10:22-10.200.16.10:52780.service - OpenSSH per-connection server daemon (10.200.16.10:52780). Jan 14 14:39:22.532942 sshd[5313]: Accepted publickey for core from 10.200.16.10 port 52780 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:39:22.534566 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:39:22.539519 systemd-logind[1663]: New session 33 of user core. Jan 14 14:39:22.546621 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 14 14:39:22.657718 systemd[1]: run-containerd-runc-k8s.io-0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6-runc.fqdnMG.mount: Deactivated successfully. Jan 14 14:39:22.657848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e36bff0e3e8eaebd50fb88ffd639c8b669a373730dffc9e22097a638cbc81f6-rootfs.mount: Deactivated successfully. 
Jan 14 14:39:22.699067 containerd[1696]: time="2025-01-14T14:39:22.698852551Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 14 14:39:22.735425 containerd[1696]: time="2025-01-14T14:39:22.735360784Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e\"" Jan 14 14:39:22.736083 containerd[1696]: time="2025-01-14T14:39:22.735991588Z" level=info msg="StartContainer for \"bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e\"" Jan 14 14:39:22.786618 systemd[1]: Started cri-containerd-bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e.scope - libcontainer container bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e. Jan 14 14:39:22.818963 systemd[1]: cri-containerd-bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e.scope: Deactivated successfully. 
Jan 14 14:39:22.819846 containerd[1696]: time="2025-01-14T14:39:22.819810122Z" level=info msg="StartContainer for \"bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e\" returns successfully" Jan 14 14:39:22.866848 containerd[1696]: time="2025-01-14T14:39:22.866781522Z" level=info msg="shim disconnected" id=bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e namespace=k8s.io Jan 14 14:39:22.866848 containerd[1696]: time="2025-01-14T14:39:22.866842023Z" level=warning msg="cleaning up after shim disconnected" id=bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e namespace=k8s.io Jan 14 14:39:22.866848 containerd[1696]: time="2025-01-14T14:39:22.866854923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:39:22.883433 containerd[1696]: time="2025-01-14T14:39:22.883144727Z" level=warning msg="cleanup warnings time=\"2025-01-14T14:39:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 14 14:39:23.195407 kubelet[3293]: E0114 14:39:23.195351 3293 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 14:39:23.659040 systemd[1]: run-containerd-runc-k8s.io-bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e-runc.BRK46B.mount: Deactivated successfully. Jan 14 14:39:23.659194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd07bb091355d7f6d031e9b5a0c596fc9d317a0f7694865e16e69cb1615eb64e-rootfs.mount: Deactivated successfully. 
Jan 14 14:39:23.700187 containerd[1696]: time="2025-01-14T14:39:23.699230034Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 14 14:39:23.735220 containerd[1696]: time="2025-01-14T14:39:23.735168763Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98\"" Jan 14 14:39:23.735796 containerd[1696]: time="2025-01-14T14:39:23.735763667Z" level=info msg="StartContainer for \"07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98\"" Jan 14 14:39:23.770567 systemd[1]: Started cri-containerd-07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98.scope - libcontainer container 07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98. Jan 14 14:39:23.796346 systemd[1]: cri-containerd-07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98.scope: Deactivated successfully. 
Jan 14 14:39:23.802925 containerd[1696]: time="2025-01-14T14:39:23.802837695Z" level=info msg="StartContainer for \"07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98\" returns successfully"
Jan 14 14:39:23.832847 containerd[1696]: time="2025-01-14T14:39:23.832774086Z" level=info msg="shim disconnected" id=07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98 namespace=k8s.io
Jan 14 14:39:23.832847 containerd[1696]: time="2025-01-14T14:39:23.832831287Z" level=warning msg="cleaning up after shim disconnected" id=07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98 namespace=k8s.io
Jan 14 14:39:23.832847 containerd[1696]: time="2025-01-14T14:39:23.832842287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 14:39:23.870870 kubelet[3293]: I0114 14:39:23.869777 3293 setters.go:568] "Node became not ready" node="ci-4081.3.0-a-a739250a79" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-14T14:39:23Z","lastTransitionTime":"2025-01-14T14:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 14 14:39:24.657808 systemd[1]: run-containerd-runc-k8s.io-07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98-runc.T9z0uq.mount: Deactivated successfully.
Jan 14 14:39:24.657930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07eeb284f3bc9d9ce2a4b984ddb1ff8a9c7cc864b45f760f953c7a6d5f692d98-rootfs.mount: Deactivated successfully.
Jan 14 14:39:24.704751 containerd[1696]: time="2025-01-14T14:39:24.704706250Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 14 14:39:24.745336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797882145.mount: Deactivated successfully.
Jan 14 14:39:24.753593 containerd[1696]: time="2025-01-14T14:39:24.753542562Z" level=info msg="CreateContainer within sandbox \"542cf07513958d1ea7c3a9e69e846a835ae4ca62405444b0a7102b5a59312294\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f\""
Jan 14 14:39:24.755535 containerd[1696]: time="2025-01-14T14:39:24.754103565Z" level=info msg="StartContainer for \"e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f\""
Jan 14 14:39:24.791583 systemd[1]: Started cri-containerd-e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f.scope - libcontainer container e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f.
Jan 14 14:39:24.822613 containerd[1696]: time="2025-01-14T14:39:24.822544602Z" level=info msg="StartContainer for \"e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f\" returns successfully"
Jan 14 14:39:25.199267 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 14 14:39:25.722649 kubelet[3293]: I0114 14:39:25.722593 3293 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wf9wv" podStartSLOduration=5.722532845 podStartE2EDuration="5.722532845s" podCreationTimestamp="2025-01-14 14:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:39:25.722055342 +0000 UTC m=+257.789141560" watchObservedRunningTime="2025-01-14 14:39:25.722532845 +0000 UTC m=+257.789619063"
Jan 14 14:39:27.939799 systemd-networkd[1550]: lxc_health: Link UP
Jan 14 14:39:27.955440 systemd-networkd[1550]: lxc_health: Gained carrier
Jan 14 14:39:29.963602 systemd-networkd[1550]: lxc_health: Gained IPv6LL
Jan 14 14:39:33.813624 systemd[1]: run-containerd-runc-k8s.io-e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f-runc.Kcoe0U.mount: Deactivated successfully.
Jan 14 14:39:35.989709 systemd[1]: run-containerd-runc-k8s.io-e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f-runc.U3zCzr.mount: Deactivated successfully.
Jan 14 14:39:44.410599 systemd[1]: run-containerd-runc-k8s.io-e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f-runc.R6Mlax.mount: Deactivated successfully.
Jan 14 14:39:48.630524 systemd[1]: run-containerd-runc-k8s.io-e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f-runc.u3tXyt.mount: Deactivated successfully.
Jan 14 14:39:50.798335 systemd[1]: run-containerd-runc-k8s.io-e9ac091ff6970f89dfc11931cb80cecd29504d6e1ece1653c49104df6f12ae3f-runc.Gni5Qi.mount: Deactivated successfully.
Jan 14 14:39:55.453994 sshd[5313]: pam_unix(sshd:session): session closed for user core
Jan 14 14:39:55.458615 systemd[1]: sshd@30-10.200.8.10:22-10.200.16.10:52780.service: Deactivated successfully.
Jan 14 14:39:55.460767 systemd[1]: session-33.scope: Deactivated successfully.
Jan 14 14:39:55.461617 systemd-logind[1663]: Session 33 logged out. Waiting for processes to exit.
Jan 14 14:39:55.462979 systemd-logind[1663]: Removed session 33.