Mar 17 18:38:55.823657 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:38:55.823675 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:38:55.823684 kernel: BIOS-provided physical RAM map:
Mar 17 18:38:55.823690 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 18:38:55.823695 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 18:38:55.823701 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 18:38:55.823707 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 18:38:55.823713 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 18:38:55.823719 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 17 18:38:55.823725 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 17 18:38:55.823731 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Mar 17 18:38:55.823736 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Mar 17 18:38:55.823742 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 17 18:38:55.823748 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 18:38:55.823755 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 17 18:38:55.823762 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 17 18:38:55.823767 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 18:38:55.823773 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:38:55.823779 kernel: NX (Execute Disable) protection: active
Mar 17 18:38:55.823785 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Mar 17 18:38:55.823791 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Mar 17 18:38:55.823797 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Mar 17 18:38:55.823803 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Mar 17 18:38:55.823808 kernel: extended physical RAM map:
Mar 17 18:38:55.823814 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 18:38:55.823821 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 18:38:55.823827 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 18:38:55.823833 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 18:38:55.823839 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 18:38:55.823844 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Mar 17 18:38:55.823850 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Mar 17 18:38:55.823856 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Mar 17 18:38:55.823862 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Mar 17 18:38:55.823868 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Mar 17 18:38:55.823874 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Mar 17 18:38:55.823879 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Mar 17 18:38:55.823886 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Mar 17 18:38:55.823892 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Mar 17 18:38:55.823898 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 18:38:55.823904 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Mar 17 18:38:55.823912 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Mar 17 18:38:55.823919 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 18:38:55.823925 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:38:55.823932 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:38:55.823938 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Mar 17 18:38:55.823945 kernel: random: crng init done
Mar 17 18:38:55.823951 kernel: SMBIOS 2.8 present.
Mar 17 18:38:55.823958 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Mar 17 18:38:55.823964 kernel: Hypervisor detected: KVM
Mar 17 18:38:55.823970 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:38:55.823977 kernel: kvm-clock: cpu 0, msr 2e19a001, primary cpu clock
Mar 17 18:38:55.823983 kernel: kvm-clock: using sched offset of 4018956009 cycles
Mar 17 18:38:55.823991 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:38:55.823998 kernel: tsc: Detected 2794.748 MHz processor
Mar 17 18:38:55.824005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:38:55.824011 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:38:55.824018 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Mar 17 18:38:55.824024 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:38:55.824031 kernel: Using GB pages for direct mapping
Mar 17 18:38:55.824037 kernel: Secure boot disabled
Mar 17 18:38:55.824044 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:38:55.824051 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 17 18:38:55.824058 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 18:38:55.824064 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:38:55.824071 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:38:55.824077 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 17 18:38:55.824084 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:38:55.824090 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:38:55.824097 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:38:55.824103 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:38:55.824111 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 18:38:55.824117 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 17 18:38:55.824124 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 17 18:38:55.824132 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 17 18:38:55.824139 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 17 18:38:55.824146 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 17 18:38:55.824155 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 17 18:38:55.824161 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 17 18:38:55.824168 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 17 18:38:55.824175 kernel: No NUMA configuration found
Mar 17 18:38:55.824182 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Mar 17 18:38:55.824188 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Mar 17 18:38:55.824213 kernel: Zone ranges:
Mar 17 18:38:55.824219 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:38:55.824226 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Mar 17 18:38:55.824232 kernel: Normal empty
Mar 17 18:38:55.824239 kernel: Movable zone start for each node
Mar 17 18:38:55.824245 kernel: Early memory node ranges
Mar 17 18:38:55.824253 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 18:38:55.824260 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 17 18:38:55.824266 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 17 18:38:55.824273 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Mar 17 18:38:55.824279 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Mar 17 18:38:55.824285 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Mar 17 18:38:55.824292 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Mar 17 18:38:55.824298 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:38:55.824305 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 18:38:55.824311 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 17 18:38:55.824319 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:38:55.824325 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Mar 17 18:38:55.824332 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 17 18:38:55.824338 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Mar 17 18:38:55.824345 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:38:55.824351 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:38:55.824358 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:38:55.824364 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:38:55.824371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:38:55.824379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:38:55.824385 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:38:55.824392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:38:55.824398 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:38:55.824405 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:38:55.824411 kernel: TSC deadline timer available
Mar 17 18:38:55.824418 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 18:38:55.824424 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 18:38:55.824431 kernel: kvm-guest: setup PV sched yield
Mar 17 18:38:55.824438 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Mar 17 18:38:55.824445 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:38:55.824455 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:38:55.824463 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Mar 17 18:38:55.824470 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Mar 17 18:38:55.824477 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Mar 17 18:38:55.824484 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 18:38:55.824490 kernel: kvm-guest: setup async PF for cpu 0
Mar 17 18:38:55.824497 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Mar 17 18:38:55.824504 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:38:55.824511 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:38:55.824518 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Mar 17 18:38:55.824526 kernel: Policy zone: DMA32
Mar 17 18:38:55.824534 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:38:55.824541 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:38:55.824548 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:38:55.824555 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:38:55.824562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:38:55.824570 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved)
Mar 17 18:38:55.824577 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:38:55.824584 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:38:55.824591 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:38:55.824597 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:38:55.824605 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:38:55.824612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:38:55.824620 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:38:55.824627 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:38:55.824640 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
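The "Kernel command line:" entry above shows the bootloader prepending `rootflags=rw mount.usrflags=ro` to the original line, so those keys appear twice. A minimal Python sketch of how such a line splits into key/value pairs (the string is copied from the log; last-occurrence-wins is an assumption of this sketch, not a guarantee the kernel makes for every parameter):

```python
# Sketch: parse the logged kernel command line into a dict.
# Duplicate keys (rootflags, mount.usrflags) are resolved last-one-wins here.
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyS0,115200 flatcar.first_boot=detected "
    "verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a"
)

params = {}
for token in cmdline.split():
    key, _, value = token.partition("=")  # split at the first '=' only
    params[key] = value                   # flag-only tokens map to ""

print(params["root"], params["console"])
```

Splitting only at the first `=` matters: `root=LABEL=ROOT` must keep `LABEL=ROOT` intact as the value.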
Mar 17 18:38:55.824647 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:38:55.824654 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 18:38:55.824662 kernel: Console: colour dummy device 80x25
Mar 17 18:38:55.824668 kernel: printk: console [ttyS0] enabled
Mar 17 18:38:55.824675 kernel: ACPI: Core revision 20210730
Mar 17 18:38:55.824682 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:38:55.824690 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:38:55.824697 kernel: x2apic enabled
Mar 17 18:38:55.824704 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:38:55.824711 kernel: kvm-guest: setup PV IPIs
Mar 17 18:38:55.824718 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:38:55.824724 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:38:55.824731 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Mar 17 18:38:55.824738 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 18:38:55.824745 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 18:38:55.824753 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 18:38:55.824760 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:38:55.824767 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:38:55.824774 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:38:55.824781 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:38:55.824788 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 18:38:55.824794 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 18:38:55.824801 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:38:55.824808 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:38:55.824817 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:38:55.824824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:38:55.824830 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:38:55.824837 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:38:55.824844 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:38:55.824851 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:38:55.824858 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:38:55.824865 kernel: LSM: Security Framework initializing
Mar 17 18:38:55.824871 kernel: SELinux: Initializing.
Mar 17 18:38:55.824879 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:38:55.824886 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:38:55.824893 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 18:38:55.824900 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 18:38:55.824907 kernel: ... version:                0
Mar 17 18:38:55.824914 kernel: ... bit width:              48
Mar 17 18:38:55.824920 kernel: ... generic registers:      6
Mar 17 18:38:55.824927 kernel: ... value mask:             0000ffffffffffff
Mar 17 18:38:55.824934 kernel: ... max period:             00007fffffffffff
Mar 17 18:38:55.824942 kernel: ... fixed-purpose events:   0
Mar 17 18:38:55.824949 kernel: ... event mask:             000000000000003f
Mar 17 18:38:55.824955 kernel: signal: max sigframe size: 1776
Mar 17 18:38:55.824962 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:38:55.824969 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:38:55.824976 kernel: x86: Booting SMP configuration:
Mar 17 18:38:55.824982 kernel: .... node #0, CPUs: #1
Mar 17 18:38:55.824989 kernel: kvm-clock: cpu 1, msr 2e19a041, secondary cpu clock
Mar 17 18:38:55.824996 kernel: kvm-guest: setup async PF for cpu 1
Mar 17 18:38:55.825004 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Mar 17 18:38:55.825011 kernel: #2
Mar 17 18:38:55.825018 kernel: kvm-clock: cpu 2, msr 2e19a081, secondary cpu clock
Mar 17 18:38:55.825025 kernel: kvm-guest: setup async PF for cpu 2
Mar 17 18:38:55.825032 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Mar 17 18:38:55.825038 kernel: #3
Mar 17 18:38:55.825045 kernel: kvm-clock: cpu 3, msr 2e19a0c1, secondary cpu clock
Mar 17 18:38:55.825052 kernel: kvm-guest: setup async PF for cpu 3
Mar 17 18:38:55.825059 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Mar 17 18:38:55.825065 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:38:55.825075 kernel: smpboot: Max logical packages: 1
Mar 17 18:38:55.825083 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Mar 17 18:38:55.825090 kernel: devtmpfs: initialized
Mar 17 18:38:55.825099 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:38:55.825106 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 17 18:38:55.825113 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 17 18:38:55.825120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Mar 17 18:38:55.825127 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 17 18:38:55.825133 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 17 18:38:55.825142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:38:55.825149 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:38:55.825156 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:38:55.825163 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:38:55.825170 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:38:55.825177 kernel: audit: type=2000 audit(1742236736.016:1): state=initialized audit_enabled=0 res=1
Mar 17 18:38:55.825183 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:38:55.825206 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:38:55.825214 kernel: cpuidle: using governor menu
Mar 17 18:38:55.825222 kernel: ACPI: bus type PCI registered
Mar 17 18:38:55.825229 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:38:55.825235 kernel: dca service started, version 1.12.1
Mar 17 18:38:55.825243 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 18:38:55.825249 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 18:38:55.825256 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:38:55.825263 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
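The BIOS-e820 map logged near the top of this boot can be sanity-checked against the later "Memory: 2397432K/2567000K" line by summing its usable ranges. A small sketch, with the usable entries transcribed from the log; note that e820 end addresses are inclusive, so each range size is `end - start + 1`:

```python
import re

# Sketch: tally the "usable" ranges from the BIOS-provided physical RAM map.
e820_lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable",
    "BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable",
    "BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable",
    "BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable",
    "BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable",
]

pattern = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")
total_bytes = 0
for line in e820_lines:
    m = pattern.search(line)
    start, end = int(m.group(1), 16), int(m.group(2), 16)
    total_bytes += end - start + 1  # e820 ranges are inclusive

print(f"usable: {total_bytes // 1024} KiB")  # usable: 2567004 KiB
```

The total, 2567004 KiB, lands within a few KiB of the 2567000K the kernel reports as total memory, which is the expected relationship.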
Mar 17 18:38:55.825270 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:38:55.825277 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:38:55.825285 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:38:55.825292 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:38:55.825299 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:38:55.825306 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:38:55.825313 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:38:55.825320 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:38:55.825326 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:38:55.825333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:38:55.825340 kernel: ACPI: Interpreter enabled
Mar 17 18:38:55.825348 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:38:55.825355 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:38:55.825362 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:38:55.825368 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 18:38:55.825375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:38:55.825484 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:38:55.825556 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 18:38:55.825625 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 18:38:55.825641 kernel: PCI host bridge to bus 0000:00
Mar 17 18:38:55.825717 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:38:55.825780 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:38:55.825841 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:38:55.825901 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 18:38:55.825960 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:38:55.826021 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Mar 17 18:38:55.826081 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:38:55.826163 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 18:38:55.826253 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 18:38:55.826323 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 17 18:38:55.826392 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 17 18:38:55.826457 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 17 18:38:55.826526 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 17 18:38:55.826593 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:38:55.826676 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:38:55.826749 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 17 18:38:55.826818 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 17 18:38:55.826886 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Mar 17 18:38:55.826959 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:38:55.827030 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 17 18:38:55.827103 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 17 18:38:55.827173 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Mar 17 18:38:55.827289 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:38:55.827362 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 17 18:38:55.827429 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 17 18:38:55.827499 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Mar 17 18:38:55.827566 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 17 18:38:55.827648 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 18:38:55.827717 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 18:38:55.827790 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 18:38:55.827857 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 17 18:38:55.827924 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 17 18:38:55.828000 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 18:38:55.828068 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 17 18:38:55.828078 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:38:55.828085 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:38:55.828092 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:38:55.828101 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:38:55.828108 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 18:38:55.828117 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 18:38:55.828126 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 18:38:55.828133 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 18:38:55.828140 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 18:38:55.828147 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 18:38:55.828154 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 18:38:55.828161 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 18:38:55.828167 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 18:38:55.828174 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 18:38:55.828181 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 18:38:55.828189 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 18:38:55.828208 kernel: iommu: Default domain type: Translated
Mar 17 18:38:55.828215 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:38:55.828286 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 18:38:55.828354 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:38:55.828421 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 18:38:55.828430 kernel: vgaarb: loaded
Mar 17 18:38:55.828437 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:38:55.828444 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:38:55.828453 kernel: PTP clock support registered
Mar 17 18:38:55.828460 kernel: Registered efivars operations
Mar 17 18:38:55.828467 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:38:55.828474 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:38:55.828481 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 17 18:38:55.828488 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Mar 17 18:38:55.828494 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Mar 17 18:38:55.828501 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Mar 17 18:38:55.828508 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Mar 17 18:38:55.828516 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Mar 17 18:38:55.828523 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:38:55.828530 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:38:55.828537 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:38:55.828543 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:38:55.828550 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:38:55.828557 kernel: pnp: PnP ACPI init
Mar 17 18:38:55.828641 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 18:38:55.828653 kernel: pnp: PnP ACPI: found 6 devices
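The PCI enumeration above identifies each function by its `vendor:device` ID pair; vendor `1af4` is Red Hat's virtio ID space, which is how the guest's paravirtual devices show up on the Q35 bus next to the emulated Intel chipset functions. A sketch that extracts the IDs from those log lines (the lines are transcribed from the enumeration above; reading meaning into specific device IDs is this sketch's assumption, not something stated in the log):

```python
import re

# Sketch: pull BDF and vendor:device IDs out of the PCI enumeration entries.
enum_lines = [
    "pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000",
    "pci 0000:00:01.0: [1234:1111] type 00 class 0x030000",
    "pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00",
    "pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000",
    "pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000",
    "pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100",
    "pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601",
    "pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500",
]

pat = re.compile(r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\]")
devices = {}
for line in enum_lines:
    bdf, vendor, device = pat.search(line).groups()
    devices[bdf] = (vendor, device)

# Vendor 1af4 marks the virtio functions (here: entropy, block, and net).
virtio = sorted(bdf for bdf, (vendor, _) in devices.items() if vendor == "1af4")
print(virtio)
```

The class codes in the same lines corroborate the roles: `0x010000` (SCSI/block) for 00:03.0 and `0x020000` (Ethernet) for 00:04.0.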
Mar 17 18:38:55.828660 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:38:55.828668 kernel: NET: Registered PF_INET protocol family
Mar 17 18:38:55.828675 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:38:55.828682 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:38:55.828689 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:38:55.828696 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:38:55.828703 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:38:55.828711 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:38:55.828718 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:38:55.828725 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:38:55.828731 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:38:55.828738 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:38:55.828808 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 17 18:38:55.828877 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 17 18:38:55.828940 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:38:55.829000 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:38:55.829063 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:38:55.829136 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 18:38:55.829222 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:38:55.829284 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 17 18:38:55.829294 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:38:55.829300 kernel: Initialise system trusted keyrings
Mar 17 18:38:55.829307 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:38:55.829314 kernel: Key type asymmetric registered
Mar 17 18:38:55.829324 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:38:55.829331 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:38:55.829345 kernel: io scheduler mq-deadline registered
Mar 17 18:38:55.829354 kernel: io scheduler kyber registered
Mar 17 18:38:55.829361 kernel: io scheduler bfq registered
Mar 17 18:38:55.829368 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:38:55.829376 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:38:55.829383 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:38:55.829391 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 18:38:55.829399 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:38:55.829406 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:38:55.829414 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:38:55.829421 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:38:55.829428 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:38:55.829436 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:38:55.829508 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:38:55.829571 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:38:55.829644 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:38:55 UTC (1742236735)
Mar 17 18:38:55.829708 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:38:55.829718 kernel: efifb: probing for efifb
Mar 17 18:38:55.829725 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 17 18:38:55.829732 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 17 18:38:55.829739 kernel: efifb: scrolling: redraw
Mar 17 18:38:55.829746 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:38:55.829754 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 18:38:55.829761 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:38:55.829770 kernel: pstore: Registered efi as persistent store backend
Mar 17 18:38:55.829777 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:38:55.829785 kernel: Segment Routing with IPv6
Mar 17 18:38:55.829793 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:38:55.829800 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:38:55.829808 kernel: Key type dns_resolver registered
Mar 17 18:38:55.829816 kernel: IPI shorthand broadcast: enabled
Mar 17 18:38:55.829823 kernel: sched_clock: Marking stable (446241609, 127302710)->(589996124, -16451805)
Mar 17 18:38:55.829830 kernel: registered taskstats version 1
Mar 17 18:38:55.829837 kernel: Loading compiled-in X.509 certificates
Mar 17 18:38:55.829845 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:38:55.829852 kernel: Key type .fscrypt registered
Mar 17 18:38:55.829859 kernel: Key type fscrypt-provisioning registered
Mar 17 18:38:55.829866 kernel: pstore: Using crash dump compression: deflate
Mar 17 18:38:55.829875 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:38:55.829882 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:38:55.829889 kernel: ima: No architecture policies found
Mar 17 18:38:55.829896 kernel: clk: Disabling unused clocks
Mar 17 18:38:55.829903 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:38:55.829910 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:38:55.829917 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:38:55.829926 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:38:55.829933 kernel: Run /init as init process
Mar 17 18:38:55.829940 kernel: with arguments:
Mar 17 18:38:55.829948 kernel: /init
Mar 17 18:38:55.829955 kernel: with environment:
Mar 17 18:38:55.829962 kernel: HOME=/
Mar 17 18:38:55.829969 kernel: TERM=linux
Mar 17 18:38:55.829976 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:38:55.829986 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:38:55.829995 systemd[1]: Detected virtualization kvm.
Mar 17 18:38:55.830004 systemd[1]: Detected architecture x86-64.
Mar 17 18:38:55.830011 systemd[1]: Running in initrd.
Mar 17 18:38:55.830019 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:38:55.830026 systemd[1]: Hostname set to .
Mar 17 18:38:55.830034 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:38:55.830042 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:38:55.830049 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:38:55.830057 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:38:55.830064 systemd[1]: Reached target paths.target.
Mar 17 18:38:55.830073 systemd[1]: Reached target slices.target.
Mar 17 18:38:55.830080 systemd[1]: Reached target swap.target.
Mar 17 18:38:55.830088 systemd[1]: Reached target timers.target.
Mar 17 18:38:55.830096 systemd[1]: Listening on iscsid.socket.
Mar 17 18:38:55.830103 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:38:55.830111 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:38:55.830118 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:38:55.830127 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:38:55.830134 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:38:55.830142 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:38:55.830150 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:38:55.830157 systemd[1]: Reached target sockets.target.
Mar 17 18:38:55.830165 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:38:55.830172 systemd[1]: Finished network-cleanup.service.
Mar 17 18:38:55.830180 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:38:55.830188 systemd[1]: Starting systemd-journald.service...
Mar 17 18:38:55.830208 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:38:55.830215 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:38:55.830223 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:38:55.830230 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:38:55.830238 kernel: audit: type=1130 audit(1742236735.824:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.830246 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:38:55.830254 kernel: audit: type=1130 audit(1742236735.828:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.830264 systemd-journald[197]: Journal started
Mar 17 18:38:55.830301 systemd-journald[197]: Runtime Journal (/run/log/journal/8dff836050714ee287a36449295dd43b) is 6.0M, max 48.4M, 42.4M free.
Mar 17 18:38:55.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.832222 systemd[1]: Started systemd-journald.service.
Mar 17 18:38:55.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.833621 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:38:55.840834 kernel: audit: type=1130 audit(1742236735.833:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.840855 kernel: audit: type=1130 audit(1742236735.836:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.836225 systemd-modules-load[198]: Inserted module 'overlay'
Mar 17 18:38:55.837771 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:38:55.841474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:38:55.848356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:38:55.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.852216 kernel: audit: type=1130 audit(1742236735.848:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.855057 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:38:55.860267 kernel: audit: type=1130 audit(1742236735.855:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.856572 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:38:55.856786 systemd-resolved[199]: Positive Trust Anchors:
Mar 17 18:38:55.866398 kernel: audit: type=1130 audit(1742236735.861:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.856795 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:38:55.856822 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:38:55.859181 systemd-resolved[199]: Defaulting to hostname 'linux'.
Mar 17 18:38:55.860336 systemd[1]: Started systemd-resolved.service.
Mar 17 18:38:55.862005 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:38:55.876739 dracut-cmdline[214]: dracut-dracut-053
Mar 17 18:38:55.878567 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:38:55.887214 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:38:55.891729 systemd-modules-load[198]: Inserted module 'br_netfilter'
Mar 17 18:38:55.892775 kernel: Bridge firewalling registered
Mar 17 18:38:55.908215 kernel: SCSI subsystem initialized
Mar 17 18:38:55.919216 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:38:55.919238 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:38:55.919248 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:38:55.923028 systemd-modules-load[198]: Inserted module 'dm_multipath'
Mar 17 18:38:55.923695 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:38:55.928694 kernel: audit: type=1130 audit(1742236735.924:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.925419 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:38:55.933529 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:38:55.937743 kernel: audit: type=1130 audit(1742236735.933:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:55.942216 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:38:55.958218 kernel: iscsi: registered transport (tcp)
Mar 17 18:38:55.978214 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:38:55.978232 kernel: QLogic iSCSI HBA Driver
Mar 17 18:38:56.008107 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:38:56.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:56.009010 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:38:56.056230 kernel: raid6: avx2x4 gen() 30272 MB/s
Mar 17 18:38:56.073214 kernel: raid6: avx2x4 xor() 8511 MB/s
Mar 17 18:38:56.090212 kernel: raid6: avx2x2 gen() 32248 MB/s
Mar 17 18:38:56.107213 kernel: raid6: avx2x2 xor() 19237 MB/s
Mar 17 18:38:56.124210 kernel: raid6: avx2x1 gen() 26582 MB/s
Mar 17 18:38:56.141212 kernel: raid6: avx2x1 xor() 15327 MB/s
Mar 17 18:38:56.158210 kernel: raid6: sse2x4 gen() 14837 MB/s
Mar 17 18:38:56.175211 kernel: raid6: sse2x4 xor() 7584 MB/s
Mar 17 18:38:56.192212 kernel: raid6: sse2x2 gen() 16362 MB/s
Mar 17 18:38:56.209210 kernel: raid6: sse2x2 xor() 9836 MB/s
Mar 17 18:38:56.226211 kernel: raid6: sse2x1 gen() 12353 MB/s
Mar 17 18:38:56.243637 kernel: raid6: sse2x1 xor() 7738 MB/s
Mar 17 18:38:56.243649 kernel: raid6: using algorithm avx2x2 gen() 32248 MB/s
Mar 17 18:38:56.243658 kernel: raid6: .... xor() 19237 MB/s, rmw enabled
Mar 17 18:38:56.244353 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:38:56.256217 kernel: xor: automatically using best checksumming function avx
Mar 17 18:38:56.345214 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 18:38:56.353210 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:38:56.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:56.354000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:38:56.355000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:38:56.355625 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:38:56.367086 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Mar 17 18:38:56.370762 systemd[1]: Started systemd-udevd.service.
Mar 17 18:38:56.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:56.373103 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:38:56.382283 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Mar 17 18:38:56.404028 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:38:56.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:56.406340 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:38:56.435569 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:38:56.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:56.462492 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:38:56.467944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:38:56.467994 kernel: GPT:9289727 != 19775487
Mar 17 18:38:56.468011 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:38:56.468020 kernel: GPT:9289727 != 19775487
Mar 17 18:38:56.468028 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:38:56.468036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:38:56.471212 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:38:56.481216 kernel: libata version 3.00 loaded.
Mar 17 18:38:56.491558 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:38:56.491580 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:38:56.493872 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:38:56.498705 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453)
Mar 17 18:38:56.493964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:38:56.499374 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:38:56.511525 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 18:38:56.533065 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 18:38:56.533079 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 18:38:56.533172 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 18:38:56.533267 kernel: scsi host0: ahci
Mar 17 18:38:56.533367 kernel: scsi host1: ahci
Mar 17 18:38:56.533465 kernel: scsi host2: ahci
Mar 17 18:38:56.533553 kernel: scsi host3: ahci
Mar 17 18:38:56.533648 kernel: scsi host4: ahci
Mar 17 18:38:56.533737 kernel: scsi host5: ahci
Mar 17 18:38:56.533823 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 17 18:38:56.533833 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 17 18:38:56.533842 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 17 18:38:56.533854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:38:56.533863 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 17 18:38:56.533871 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 17 18:38:56.533880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:38:56.533888 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 17 18:38:56.511796 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:38:56.515960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:38:56.536431 disk-uuid[515]: Primary Header is updated.
Mar 17 18:38:56.536431 disk-uuid[515]: Secondary Entries is updated.
Mar 17 18:38:56.536431 disk-uuid[515]: Secondary Header is updated.
Mar 17 18:38:56.517622 systemd[1]: Starting disk-uuid.service...
Mar 17 18:38:56.840219 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 18:38:56.840280 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 18:38:56.841221 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 18:38:56.848213 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 18:38:56.848250 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 18:38:56.849222 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 18:38:56.850255 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 18:38:56.850265 kernel: ata3.00: applying bridge limits
Mar 17 18:38:56.851519 kernel: ata3.00: configured for UDMA/100
Mar 17 18:38:56.852219 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 18:38:56.880222 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 18:38:56.896684 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:38:56.896697 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 18:38:57.533041 disk-uuid[535]: The operation has completed successfully.
Mar 17 18:38:57.534364 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:38:57.555111 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:38:57.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.555204 systemd[1]: Finished disk-uuid.service.
Mar 17 18:38:57.561180 systemd[1]: Starting verity-setup.service...
Mar 17 18:38:57.574217 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 18:38:57.592353 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:38:57.594451 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:38:57.596276 systemd[1]: Finished verity-setup.service.
Mar 17 18:38:57.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.653213 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:38:57.653571 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:38:57.653718 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:38:57.654957 systemd[1]: Starting ignition-setup.service...
Mar 17 18:38:57.656825 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:38:57.666231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:38:57.666263 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:38:57.667764 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:38:57.674994 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:38:57.682670 systemd[1]: Finished ignition-setup.service.
Mar 17 18:38:57.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.685026 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:38:57.718366 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:38:57.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.720000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:38:57.721446 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:38:57.722400 ignition[650]: Ignition 2.14.0
Mar 17 18:38:57.722407 ignition[650]: Stage: fetch-offline
Mar 17 18:38:57.722455 ignition[650]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:38:57.722463 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:38:57.722554 ignition[650]: parsed url from cmdline: ""
Mar 17 18:38:57.722556 ignition[650]: no config URL provided
Mar 17 18:38:57.722561 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:38:57.722567 ignition[650]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:38:57.722593 ignition[650]: op(1): [started] loading QEMU firmware config module
Mar 17 18:38:57.722600 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 18:38:57.726613 ignition[650]: op(1): [finished] loading QEMU firmware config module
Mar 17 18:38:57.745009 systemd-networkd[719]: lo: Link UP
Mar 17 18:38:57.745019 systemd-networkd[719]: lo: Gained carrier
Mar 17 18:38:57.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.745406 systemd-networkd[719]: Enumeration completed
Mar 17 18:38:57.745486 systemd[1]: Started systemd-networkd.service.
Mar 17 18:38:57.745594 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:38:57.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.746414 systemd-networkd[719]: eth0: Link UP
Mar 17 18:38:57.746417 systemd-networkd[719]: eth0: Gained carrier
Mar 17 18:38:57.747325 systemd[1]: Reached target network.target.
Mar 17 18:38:57.748885 systemd[1]: Starting iscsiuio.service...
Mar 17 18:38:57.759980 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:38:57.759980 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Mar 17 18:38:57.759980 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:38:57.759980 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:38:57.759980 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:38:57.759980 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:38:57.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.754266 systemd[1]: Started iscsiuio.service.
Mar 17 18:38:57.755854 systemd[1]: Starting iscsid.service...
Mar 17 18:38:57.760001 systemd[1]: Started iscsid.service.
Mar 17 18:38:57.775867 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:38:57.784829 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:38:57.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.786756 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:38:57.788683 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:38:57.790692 systemd[1]: Reached target remote-fs.target.
Mar 17 18:38:57.793009 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:38:57.799575 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:38:57.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.805144 ignition[650]: parsing config with SHA512: e9f1fe0f3c28cdd194c1a8c0f8a4d654786aeb9d630a6fa1e81a564f9b66476b4d827ea020ec91f79d0a5ffc16179d037b101e5e639e845db692a91f44e95745
Mar 17 18:38:57.810872 unknown[650]: fetched base config from "system"
Mar 17 18:38:57.810883 unknown[650]: fetched user config from "qemu"
Mar 17 18:38:57.812815 ignition[650]: fetch-offline: fetch-offline passed
Mar 17 18:38:57.813633 ignition[650]: Ignition finished successfully
Mar 17 18:38:57.815228 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:38:57.815269 systemd-networkd[719]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:38:57.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.818386 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 18:38:57.820702 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:38:57.828642 ignition[739]: Ignition 2.14.0
Mar 17 18:38:57.828651 ignition[739]: Stage: kargs
Mar 17 18:38:57.828733 ignition[739]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:38:57.828741 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:38:57.829722 ignition[739]: kargs: kargs passed
Mar 17 18:38:57.829754 ignition[739]: Ignition finished successfully
Mar 17 18:38:57.834135 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:38:57.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.836415 systemd[1]: Starting ignition-disks.service...
Mar 17 18:38:57.844653 ignition[745]: Ignition 2.14.0
Mar 17 18:38:57.844662 ignition[745]: Stage: disks
Mar 17 18:38:57.844747 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:38:57.844755 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:38:57.845743 ignition[745]: disks: disks passed
Mar 17 18:38:57.845774 ignition[745]: Ignition finished successfully
Mar 17 18:38:57.849718 systemd[1]: Finished ignition-disks.service.
Mar 17 18:38:57.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.849876 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:38:57.852054 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:38:57.853660 systemd[1]: Reached target local-fs.target.
Mar 17 18:38:57.855083 systemd[1]: Reached target sysinit.target.
Mar 17 18:38:57.856434 systemd[1]: Reached target basic.target.
Mar 17 18:38:57.858562 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:38:57.869469 systemd-fsck[753]: ROOT: clean, 623/553520 files, 56022/553472 blocks
Mar 17 18:38:57.874601 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:38:57.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.875238 systemd[1]: Mounting sysroot.mount...
Mar 17 18:38:57.881214 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:38:57.881536 systemd[1]: Mounted sysroot.mount.
Mar 17 18:38:57.882941 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:38:57.885341 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:38:57.886991 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:38:57.887027 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:38:57.888359 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:38:57.892328 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:38:57.894320 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:38:57.898022 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:38:57.900945 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:38:57.904484 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:38:57.908009 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:38:57.931487 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:38:57.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.933045 systemd[1]: Starting ignition-mount.service...
Mar 17 18:38:57.934589 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:38:57.937766 bash[804]: umount: /sysroot/usr/share/oem: not mounted.
Mar 17 18:38:57.944270 ignition[805]: INFO : Ignition 2.14.0
Mar 17 18:38:57.944270 ignition[805]: INFO : Stage: mount
Mar 17 18:38:57.945913 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:38:57.945913 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:38:57.945913 ignition[805]: INFO : mount: mount passed
Mar 17 18:38:57.945913 ignition[805]: INFO : Ignition finished successfully
Mar 17 18:38:57.950322 systemd[1]: Finished ignition-mount.service.
Mar 17 18:38:57.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:57.953902 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:38:57.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:38:58.603974 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:38:58.612128 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814)
Mar 17 18:38:58.612154 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:38:58.612164 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:38:58.612928 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:38:58.616471 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:38:58.617837 systemd[1]: Starting ignition-files.service...
Mar 17 18:38:58.629951 ignition[834]: INFO : Ignition 2.14.0
Mar 17 18:38:58.629951 ignition[834]: INFO : Stage: files
Mar 17 18:38:58.631761 ignition[834]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:38:58.631761 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:38:58.631761 ignition[834]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:38:58.635550 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:38:58.635550 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:38:58.635550 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:38:58.634168 unknown[834]: wrote ssh authorized keys file for user: core
Mar 17 18:38:58.678086 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:38:58.818585 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:38:58.820684 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:38:58.820684 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:38:59.041294 systemd-networkd[719]: eth0: Gained IPv6LL
Mar 17 18:38:59.378654 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 17 18:38:59.612297 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:38:59.612297 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:38:59.616076 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:38:59.965050 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 17 18:39:00.942538 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:39:00.942538 ignition[834]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:39:00.946598 ignition[834]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:39:00.980356 kernel: kauditd_printk_skb: 25 callbacks suppressed
Mar 17 18:39:00.980380 kernel: audit: type=1130 audit(1742236740.974:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.980448 ignition[834]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:39:00.980448 ignition[834]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:39:00.980448 ignition[834]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:39:00.980448 ignition[834]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:39:00.980448 ignition[834]: INFO : files: files passed
Mar 17 18:39:00.980448 ignition[834]: INFO : Ignition finished successfully
Mar 17 18:39:01.001353 kernel: audit: type=1130 audit(1742236740.985:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.001370 kernel: audit: type=1130 audit(1742236740.990:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.001380 kernel: audit: type=1131 audit(1742236740.990:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.971889 systemd[1]: Finished ignition-files.service.
Mar 17 18:39:00.975025 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:39:01.003490 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Mar 17 18:39:00.980367 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:39:01.006938 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:39:00.980927 systemd[1]: Starting ignition-quench.service...
Mar 17 18:39:00.982963 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:39:01.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.985376 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:39:01.019513 kernel: audit: type=1130 audit(1742236741.011:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.019528 kernel: audit: type=1131 audit(1742236741.011:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:00.985436 systemd[1]: Finished ignition-quench.service.
Mar 17 18:39:00.990833 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:39:00.999503 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:39:01.009733 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:39:01.009800 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:39:01.011334 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:39:01.017808 systemd[1]: Reached target initrd.target.
Mar 17 18:39:01.019550 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:39:01.020111 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:39:01.028767 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:39:01.033808 kernel: audit: type=1130 audit(1742236741.028:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.029397 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:39:01.036835 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:39:01.037763 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:39:01.039360 systemd[1]: Stopped target timers.target.
Mar 17 18:39:01.040967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:39:01.046981 kernel: audit: type=1131 audit(1742236741.042:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.041053 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:39:01.042588 systemd[1]: Stopped target initrd.target.
Mar 17 18:39:01.047057 systemd[1]: Stopped target basic.target.
Mar 17 18:39:01.048610 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:39:01.050236 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:39:01.051960 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:39:01.053528 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:39:01.055144 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:39:01.056839 systemd[1]: Stopped target sysinit.target.
Mar 17 18:39:01.058373 systemd[1]: Stopped target local-fs.target.
Mar 17 18:39:01.059957 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:39:01.061506 systemd[1]: Stopped target swap.target.
Mar 17 18:39:01.068868 kernel: audit: type=1131 audit(1742236741.064:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.062958 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:39:01.063075 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:39:01.075130 kernel: audit: type=1131 audit(1742236741.070:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.064668 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:39:01.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.068918 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:39:01.069005 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:39:01.070791 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:39:01.070878 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:39:01.075277 systemd[1]: Stopped target paths.target.
Mar 17 18:39:01.076738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:39:01.081279 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:39:01.083324 systemd[1]: Stopped target slices.target.
Mar 17 18:39:01.084851 systemd[1]: Stopped target sockets.target.
Mar 17 18:39:01.086360 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:39:01.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.086486 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:39:01.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.088096 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:39:01.093178 iscsid[725]: iscsid shutting down.
Mar 17 18:39:01.088179 systemd[1]: Stopped ignition-files.service.
Mar 17 18:39:01.090774 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:39:01.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.098682 ignition[874]: INFO : Ignition 2.14.0
Mar 17 18:39:01.098682 ignition[874]: INFO : Stage: umount
Mar 17 18:39:01.098682 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:39:01.098682 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:39:01.098682 ignition[874]: INFO : umount: umount passed
Mar 17 18:39:01.098682 ignition[874]: INFO : Ignition finished successfully
Mar 17 18:39:01.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.091667 systemd[1]: Stopping iscsid.service...
Mar 17 18:39:01.093144 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:39:01.093285 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:39:01.094171 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:39:01.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.095434 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:39:01.095582 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:39:01.097174 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:39:01.097306 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:39:01.100149 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:39:01.100242 systemd[1]: Stopped iscsid.service.
Mar 17 18:39:01.101326 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:39:01.101390 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:39:01.103080 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:39:01.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.103146 systemd[1]: Closed iscsid.socket.
Mar 17 18:39:01.104722 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:39:01.104753 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:39:01.106358 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:39:01.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.106387 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:39:01.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.108071 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:39:01.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.108099 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:39:01.109146 systemd[1]: Stopping iscsiuio.service...
Mar 17 18:39:01.111471 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:39:01.111959 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:39:01.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.112035 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:39:01.113587 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:39:01.113657 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:39:01.115700 systemd[1]: Stopped target network.target.
Mar 17 18:39:01.116833 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:39:01.145000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:39:01.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.116858 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:39:01.118400 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:39:01.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.120291 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:39:01.123277 systemd-networkd[719]: eth0: DHCPv6 lease lost
Mar 17 18:39:01.151000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:39:01.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.124603 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:39:01.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.124689 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:39:01.127799 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:39:01.127825 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:39:01.130261 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:39:01.131021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:39:01.131060 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:39:01.133093 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:39:01.133156 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:39:01.134781 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:39:01.134810 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:39:01.136673 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:39:01.138995 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:39:01.139386 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:39:01.139478 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:39:01.145327 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:39:01.145404 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:39:01.147065 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:39:01.147164 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:39:01.149578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:39:01.149605 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:39:01.151236 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:39:01.151260 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:39:01.151317 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:39:01.151343 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:39:01.151537 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:39:01.151562 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:39:01.151709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:39:01.151734 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:39:01.152550 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:39:01.152751 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:39:01.152792 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:39:01.157612 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:39:01.157677 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:39:01.202801 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:39:01.202895 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:39:01.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.204694 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:39:01.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:39:01.206146 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:39:01.206181 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:39:01.208547 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:39:01.214664 systemd[1]: Switching root.
Mar 17 18:39:01.215000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:39:01.215000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:39:01.217000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:39:01.217000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:39:01.217000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:39:01.232773 systemd-journald[197]: Journal stopped
Mar 17 18:39:03.850399 systemd-journald[197]: Received SIGTERM from PID 1 (n/a).
Mar 17 18:39:03.850446 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:39:03.850459 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:39:03.850469 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:39:03.850478 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:39:03.850487 kernel: SELinux: policy capability open_perms=1
Mar 17 18:39:03.850497 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:39:03.850506 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:39:03.850517 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:39:03.850530 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:39:03.850540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:39:03.850549 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:39:03.850560 systemd[1]: Successfully loaded SELinux policy in 37.687ms.
Mar 17 18:39:03.850578 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.328ms.
Mar 17 18:39:03.850597 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:39:03.850609 systemd[1]: Detected virtualization kvm.
Mar 17 18:39:03.850620 systemd[1]: Detected architecture x86-64.
Mar 17 18:39:03.850630 systemd[1]: Detected first boot.
Mar 17 18:39:03.850640 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:39:03.850650 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:39:03.850660 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:39:03.850670 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:39:03.850683 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:39:03.850695 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:39:03.850706 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:39:03.850716 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Mar 17 18:39:03.850726 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:39:03.850736 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:39:03.850747 systemd[1]: Created slice system-getty.slice.
Mar 17 18:39:03.850757 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:39:03.850767 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:39:03.850778 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:39:03.850789 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:39:03.850799 systemd[1]: Created slice user.slice. Mar 17 18:39:03.850809 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:39:03.850820 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:39:03.850830 systemd[1]: Set up automount boot.automount. Mar 17 18:39:03.850840 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:39:03.850851 systemd[1]: Reached target integritysetup.target. Mar 17 18:39:03.850860 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:39:03.850870 systemd[1]: Reached target remote-fs.target. Mar 17 18:39:03.850883 systemd[1]: Reached target slices.target. Mar 17 18:39:03.850893 systemd[1]: Reached target swap.target. Mar 17 18:39:03.850906 systemd[1]: Reached target torcx.target. Mar 17 18:39:03.850921 systemd[1]: Reached target veritysetup.target. Mar 17 18:39:03.850934 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:39:03.850947 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:39:03.850961 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:39:03.850975 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:39:03.850991 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:39:03.851005 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:39:03.851018 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:39:03.851032 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:39:03.851046 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:39:03.851057 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:39:03.851067 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:39:03.851077 systemd[1]: Mounting media.mount... 
Mar 17 18:39:03.851087 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:03.851098 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:39:03.851108 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:39:03.851118 systemd[1]: Mounting tmp.mount... Mar 17 18:39:03.851128 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:39:03.851138 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:39:03.851148 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:39:03.851159 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:39:03.851169 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:39:03.851179 systemd[1]: Starting modprobe@drm.service... Mar 17 18:39:03.851231 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:39:03.851242 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:39:03.851252 systemd[1]: Starting modprobe@loop.service... Mar 17 18:39:03.851262 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:39:03.851272 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 18:39:03.851282 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Mar 17 18:39:03.851292 systemd[1]: Starting systemd-journald.service... Mar 17 18:39:03.851303 kernel: loop: module loaded Mar 17 18:39:03.851313 kernel: fuse: init (API version 7.34) Mar 17 18:39:03.851324 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:39:03.851334 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:39:03.851344 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:39:03.851354 systemd[1]: Starting systemd-udev-trigger.service... 
Mar 17 18:39:03.851365 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:03.851382 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:39:03.851392 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:39:03.851404 systemd[1]: Mounted media.mount. Mar 17 18:39:03.851414 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:39:03.851426 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:39:03.851436 systemd[1]: Mounted tmp.mount. Mar 17 18:39:03.851454 systemd-journald[1015]: Journal started Mar 17 18:39:03.851498 systemd-journald[1015]: Runtime Journal (/run/log/journal/8dff836050714ee287a36449295dd43b) is 6.0M, max 48.4M, 42.4M free. Mar 17 18:39:03.769000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:39:03.769000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:39:03.849000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:39:03.849000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffce0188f80 a2=4000 a3=7ffce018901c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:39:03.849000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:39:03.855161 systemd[1]: Started systemd-journald.service. 
Mar 17 18:39:03.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.855469 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:39:03.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.856582 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:39:03.856824 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:39:03.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.858130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:39:03.858336 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:39:03.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.859642 systemd[1]: Finished flatcar-tmpfiles.service. 
Mar 17 18:39:03.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.860761 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:39:03.860960 systemd[1]: Finished modprobe@drm.service. Mar 17 18:39:03.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.862211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:39:03.862415 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:39:03.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.863599 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:39:03.863801 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:39:03.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:39:03.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.864882 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:39:03.865230 systemd[1]: Finished modprobe@loop.service. Mar 17 18:39:03.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.866573 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:39:03.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.867917 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:39:03.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.869270 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:39:03.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.870541 systemd[1]: Reached target network-pre.target. Mar 17 18:39:03.872581 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Mar 17 18:39:03.874351 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:39:03.875100 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:39:03.876685 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:39:03.878689 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:39:03.879559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:39:03.880452 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:39:03.881309 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:39:03.882180 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:39:03.884726 systemd-journald[1015]: Time spent on flushing to /var/log/journal/8dff836050714ee287a36449295dd43b is 18.485ms for 1106 entries. Mar 17 18:39:03.884726 systemd-journald[1015]: System Journal (/var/log/journal/8dff836050714ee287a36449295dd43b) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:39:03.919313 systemd-journald[1015]: Received client request to flush runtime journal. Mar 17 18:39:03.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:39:03.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.884040 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:39:03.887876 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:39:03.888862 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:39:03.919849 udevadm[1060]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:39:03.891946 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:39:03.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:03.892974 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:39:03.896028 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:39:03.897836 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:39:03.898928 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:39:03.905635 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:39:03.907343 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:39:03.920141 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:39:03.925751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:39:03.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.298133 systemd[1]: Finished systemd-hwdb-update.service. 
Mar 17 18:39:04.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.300574 systemd[1]: Starting systemd-udevd.service... Mar 17 18:39:04.317523 systemd-udevd[1068]: Using default interface naming scheme 'v252'. Mar 17 18:39:04.329444 systemd[1]: Started systemd-udevd.service. Mar 17 18:39:04.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.332225 systemd[1]: Starting systemd-networkd.service... Mar 17 18:39:04.339578 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:39:04.357719 systemd[1]: Found device dev-ttyS0.device. Mar 17 18:39:04.370124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:39:04.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.371255 systemd[1]: Started systemd-userdbd.service. Mar 17 18:39:04.406238 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:39:04.410222 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:39:04.418809 systemd-networkd[1079]: lo: Link UP Mar 17 18:39:04.418821 systemd-networkd[1079]: lo: Gained carrier Mar 17 18:39:04.419156 systemd-networkd[1079]: Enumeration completed Mar 17 18:39:04.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.419277 systemd[1]: Started systemd-networkd.service. 
Mar 17 18:39:04.419798 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:39:04.420724 systemd-networkd[1079]: eth0: Link UP Mar 17 18:39:04.420742 systemd-networkd[1079]: eth0: Gained carrier Mar 17 18:39:04.417000 audit[1087]: AVC avc: denied { confidentiality } for pid=1087 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:39:04.417000 audit[1087]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ea23bdace0 a1=338ac a2=7fe44dfc5bc5 a3=5 items=110 ppid=1068 pid=1087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:39:04.417000 audit: CWD cwd="/" Mar 17 18:39:04.417000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=1 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=2 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=3 name=(null) inode=11033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=4 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH 
item=5 name=(null) inode=11034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=6 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=7 name=(null) inode=11035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=8 name=(null) inode=11035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=9 name=(null) inode=11036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=10 name=(null) inode=11035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=11 name=(null) inode=11037 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=12 name=(null) inode=11035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=13 name=(null) inode=11038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=14 name=(null) inode=11035 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=15 name=(null) inode=11039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=16 name=(null) inode=11035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=17 name=(null) inode=11040 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=18 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=19 name=(null) inode=11041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=20 name=(null) inode=11041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=21 name=(null) inode=11042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=22 name=(null) inode=11041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=23 name=(null) inode=11043 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=24 name=(null) inode=11041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=25 name=(null) inode=11044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=26 name=(null) inode=11041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=27 name=(null) inode=11045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=28 name=(null) inode=11041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=29 name=(null) inode=11046 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=30 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=31 name=(null) inode=11047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=32 name=(null) inode=11047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=33 name=(null) inode=11048 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=34 name=(null) inode=11047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=35 name=(null) inode=11049 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=36 name=(null) inode=11047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=37 name=(null) inode=11050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=38 name=(null) inode=11047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=39 name=(null) inode=11051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=40 name=(null) inode=11047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=41 name=(null) inode=11052 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=42 name=(null) inode=11032 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=43 name=(null) inode=11053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=44 name=(null) inode=11053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=45 name=(null) inode=11054 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=46 name=(null) inode=11053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=47 name=(null) inode=11055 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=48 name=(null) inode=11053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=49 name=(null) inode=11056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=50 name=(null) inode=11053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:39:04.417000 audit: PATH item=51 name=(null) inode=11057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=52 name=(null) inode=11053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=53 name=(null) inode=11058 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=55 name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=56 name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=57 name=(null) inode=11060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=58 name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=59 name=(null) inode=11061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=60 
name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=61 name=(null) inode=11062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=62 name=(null) inode=11062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=63 name=(null) inode=11063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=64 name=(null) inode=11062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=65 name=(null) inode=11064 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=66 name=(null) inode=11062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=67 name=(null) inode=11065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=68 name=(null) inode=11062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=69 name=(null) inode=11066 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=70 name=(null) inode=11062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=71 name=(null) inode=11067 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=72 name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=73 name=(null) inode=11068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=74 name=(null) inode=11068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=75 name=(null) inode=11069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=76 name=(null) inode=11068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=77 name=(null) inode=11070 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=78 name=(null) inode=11068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=79 name=(null) inode=11071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=80 name=(null) inode=11068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=81 name=(null) inode=11072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=82 name=(null) inode=11068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=83 name=(null) inode=11073 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=84 name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=85 name=(null) inode=11074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=86 name=(null) inode=11074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=87 name=(null) inode=11075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=88 name=(null) inode=11074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=89 name=(null) inode=11076 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=90 name=(null) inode=11074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=91 name=(null) inode=11077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=92 name=(null) inode=11074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=93 name=(null) inode=11078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=94 name=(null) inode=11074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=95 name=(null) inode=11079 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=96 name=(null) inode=11059 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=97 name=(null) inode=11080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=98 name=(null) inode=11080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=99 name=(null) inode=11081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=100 name=(null) inode=11080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=101 name=(null) inode=11082 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=102 name=(null) inode=11080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=103 name=(null) inode=11083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=104 name=(null) inode=11080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=105 name=(null) inode=11084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:39:04.417000 audit: PATH item=106 name=(null) inode=11080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=107 name=(null) inode=11085 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PATH item=109 name=(null) inode=11086 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:39:04.417000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:39:04.432349 systemd-networkd[1079]: eth0: DHCPv4 address 10.0.0.57/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:39:04.441121 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 17 18:39:04.445574 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 18:39:04.445683 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 18:39:04.445793 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 18:39:04.445875 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 18:39:04.449228 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:39:04.495679 kernel: kvm: Nested Virtualization enabled Mar 17 18:39:04.495737 kernel: SVM: kvm: Nested Paging enabled Mar 17 18:39:04.495752 kernel: SVM: Virtual VMLOAD VMSAVE supported Mar 17 18:39:04.496336 kernel: SVM: Virtual GIF supported Mar 17 18:39:04.512234 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:39:04.537589 systemd[1]: Finished systemd-udev-settle.service. 
Mar 17 18:39:04.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.539733 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:39:04.547891 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:39:04.576942 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:39:04.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.577971 systemd[1]: Reached target cryptsetup.target. Mar 17 18:39:04.579689 systemd[1]: Starting lvm2-activation.service... Mar 17 18:39:04.584212 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:39:04.616412 systemd[1]: Finished lvm2-activation.service. Mar 17 18:39:04.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.617502 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:39:04.618422 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:39:04.618445 systemd[1]: Reached target local-fs.target. Mar 17 18:39:04.619280 systemd[1]: Reached target machines.target. Mar 17 18:39:04.621282 systemd[1]: Starting ldconfig.service... Mar 17 18:39:04.622248 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 18:39:04.622316 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:39:04.623236 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:39:04.624897 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:39:04.626931 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:39:04.629013 systemd[1]: Starting systemd-sysext.service... Mar 17 18:39:04.630165 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Mar 17 18:39:04.631139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:39:04.634551 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:39:04.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.643171 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:39:04.646358 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:39:04.646556 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:39:04.657320 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 18:39:04.659738 systemd-fsck[1118]: fsck.fat 4.2 (2021-01-31) Mar 17 18:39:04.659738 systemd-fsck[1118]: /dev/vda1: 790 files, 119319/258078 clusters Mar 17 18:39:04.661568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:39:04.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.664554 systemd[1]: Mounting boot.mount... 
Mar 17 18:39:04.677808 systemd[1]: Mounted boot.mount. Mar 17 18:39:04.869216 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:39:04.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.877373 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:39:04.877778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:39:04.878339 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:39:04.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.891210 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 18:39:04.894604 (sd-sysext)[1131]: Using extensions 'kubernetes'. Mar 17 18:39:04.894921 (sd-sysext)[1131]: Merged extensions into '/usr'. Mar 17 18:39:04.909454 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:04.910766 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:39:04.911990 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:39:04.912962 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:39:04.914958 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:39:04.916746 systemd[1]: Starting modprobe@loop.service... Mar 17 18:39:04.917706 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:39:04.917829 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:39:04.917936 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:04.920641 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:39:04.921875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:39:04.922007 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:39:04.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.923364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:39:04.923487 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:39:04.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.924952 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:39:04.925253 systemd[1]: Finished modprobe@loop.service. Mar 17 18:39:04.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:39:04.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.926544 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:39:04.926633 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:39:04.928204 systemd[1]: Finished systemd-sysext.service. Mar 17 18:39:04.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.930577 systemd[1]: Starting ensure-sysext.service... Mar 17 18:39:04.932334 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:39:04.935690 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:39:04.938028 systemd[1]: Finished ldconfig.service. Mar 17 18:39:04.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:04.940425 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:39:04.940465 systemd[1]: Reloading. Mar 17 18:39:04.941407 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:39:04.942674 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Mar 17 18:39:04.982232 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-03-17T18:39:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:39:04.982556 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-03-17T18:39:04Z" level=info msg="torcx already run" Mar 17 18:39:05.047944 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:39:05.047960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:39:05.064407 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:39:05.118155 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:39:05.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.121762 systemd[1]: Starting audit-rules.service... Mar 17 18:39:05.123456 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:39:05.125453 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:39:05.127961 systemd[1]: Starting systemd-resolved.service... Mar 17 18:39:05.129899 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:39:05.131573 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:39:05.132941 systemd[1]: Finished clean-ca-certificates.service. 
Mar 17 18:39:05.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.135000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.139422 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:05.139626 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.141922 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:39:05.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.143820 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:39:05.145593 systemd[1]: Starting modprobe@loop.service... Mar 17 18:39:05.146434 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.146568 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:39:05.146697 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:39:05.146824 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:05.148089 systemd[1]: Finished systemd-journal-catalog-update.service. 
Mar 17 18:39:05.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.149562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:39:05.149701 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:39:05.151123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:39:05.151265 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:39:05.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.153000 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:39:05.153122 systemd[1]: Finished modprobe@loop.service. Mar 17 18:39:05.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:39:05.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:39:05.155000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:39:05.155000 audit[1245]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd7dad260 a2=420 a3=0 items=0 ppid=1215 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:39:05.155000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:39:05.154690 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:39:05.157880 augenrules[1245]: No rules Mar 17 18:39:05.154806 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.156761 systemd[1]: Starting systemd-update-done.service... Mar 17 18:39:05.162110 systemd[1]: Finished audit-rules.service. Mar 17 18:39:05.163411 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:39:05.164661 systemd[1]: Finished systemd-update-done.service. Mar 17 18:39:05.167888 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:05.168073 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.169436 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:39:05.171080 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:39:05.172651 systemd[1]: Starting modprobe@loop.service... Mar 17 18:39:05.173398 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 18:39:05.173503 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:39:05.173593 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:39:05.173659 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:05.174407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:39:05.175079 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:39:05.176318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:39:05.176472 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:39:05.177681 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:39:05.177805 systemd[1]: Finished modprobe@loop.service. Mar 17 18:39:05.178931 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:39:05.179017 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.181310 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:05.181505 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.182747 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:39:05.184414 systemd[1]: Starting modprobe@drm.service... Mar 17 18:39:05.186385 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:39:05.188367 systemd[1]: Starting modprobe@loop.service... Mar 17 18:39:05.189351 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 18:39:05.189450 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:39:05.190725 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:39:05.191787 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:39:05.191893 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:39:05.192912 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:39:05.193035 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:39:05.194443 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:39:05.194565 systemd[1]: Finished modprobe@drm.service. Mar 17 18:39:05.195793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:39:05.195937 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:39:05.197539 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:39:05.197824 systemd[1]: Finished modprobe@loop.service. Mar 17 18:39:05.199260 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:39:05.199349 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:39:05.200445 systemd[1]: Finished ensure-sysext.service. Mar 17 18:39:05.210007 systemd-resolved[1221]: Positive Trust Anchors: Mar 17 18:39:05.210019 systemd-resolved[1221]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:39:05.210046 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:39:05.213748 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:39:05.214923 systemd[1]: Reached target time-set.target. Mar 17 18:39:06.207146 systemd-timesyncd[1223]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:39:06.207181 systemd-timesyncd[1223]: Initial clock synchronization to Mon 2025-03-17 18:39:06.207076 UTC. Mar 17 18:39:06.208683 systemd-resolved[1221]: Defaulting to hostname 'linux'. Mar 17 18:39:06.210030 systemd[1]: Started systemd-resolved.service. Mar 17 18:39:06.210890 systemd[1]: Reached target network.target. Mar 17 18:39:06.211674 systemd[1]: Reached target nss-lookup.target. Mar 17 18:39:06.212480 systemd[1]: Reached target sysinit.target. Mar 17 18:39:06.213326 systemd[1]: Started motdgen.path. Mar 17 18:39:06.214033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:39:06.215201 systemd[1]: Started logrotate.timer. Mar 17 18:39:06.215990 systemd[1]: Started mdadm.timer. Mar 17 18:39:06.216656 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:39:06.217486 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:39:06.217508 systemd[1]: Reached target paths.target. Mar 17 18:39:06.218237 systemd[1]: Reached target timers.target. Mar 17 18:39:06.219226 systemd[1]: Listening on dbus.socket. 
Mar 17 18:39:06.221011 systemd[1]: Starting docker.socket...
Mar 17 18:39:06.222500 systemd[1]: Listening on sshd.socket.
Mar 17 18:39:06.223304 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:39:06.223546 systemd[1]: Listening on docker.socket.
Mar 17 18:39:06.224305 systemd[1]: Reached target sockets.target.
Mar 17 18:39:06.225079 systemd[1]: Reached target basic.target.
Mar 17 18:39:06.225919 systemd[1]: System is tainted: cgroupsv1
Mar 17 18:39:06.225956 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:39:06.225974 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:39:06.226809 systemd[1]: Starting containerd.service...
Mar 17 18:39:06.228413 systemd[1]: Starting dbus.service...
Mar 17 18:39:06.229972 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:39:06.231733 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:39:06.234000 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:39:06.235336 jq[1278]: false
Mar 17 18:39:06.235204 systemd[1]: Starting motdgen.service...
Mar 17 18:39:06.236837 systemd[1]: Starting prepare-helm.service...
Mar 17 18:39:06.238489 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:39:06.240253 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:39:06.242599 systemd[1]: Starting systemd-logind.service...
Mar 17 18:39:06.243347 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:39:06.243396 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:39:06.244313 systemd[1]: Starting update-engine.service...
Mar 17 18:39:06.245994 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:39:06.248124 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:39:06.255059 jq[1296]: true
Mar 17 18:39:06.249233 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:39:06.257331 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:39:06.257515 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:39:06.263855 tar[1300]: linux-amd64/helm
Mar 17 18:39:06.263443 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:39:06.263624 systemd[1]: Finished motdgen.service.
Mar 17 18:39:06.265414 jq[1301]: true
Mar 17 18:39:06.269407 systemd[1]: Started dbus.service.
Mar 17 18:39:06.269273 dbus-daemon[1277]: [system] SELinux support is enabled
Mar 17 18:39:06.271790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:39:06.271815 systemd[1]: Reached target system-config.target.
Mar 17 18:39:06.272746 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:39:06.272760 systemd[1]: Reached target user-config.target.
Mar 17 18:39:06.279748 update_engine[1292]: I0317 18:39:06.279620 1292 main.cc:92] Flatcar Update Engine starting
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found loop1
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found sr0
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda1
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda2
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda3
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found usr
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda4
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda6
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda7
Mar 17 18:39:06.283371 extend-filesystems[1279]: Found vda9
Mar 17 18:39:06.283371 extend-filesystems[1279]: Checking size of /dev/vda9
Mar 17 18:39:06.295730 env[1303]: time="2025-03-17T18:39:06.284583080Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:39:06.296585 systemd[1]: Started update-engine.service.
Mar 17 18:39:06.297670 update_engine[1292]: I0317 18:39:06.297427 1292 update_check_scheduler.cc:74] Next update check in 9m51s
Mar 17 18:39:06.298700 systemd[1]: Started locksmithd.service.
Mar 17 18:39:06.306973 extend-filesystems[1279]: Resized partition /dev/vda9
Mar 17 18:39:06.309202 extend-filesystems[1340]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:39:06.309643 systemd-logind[1289]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:39:06.309660 systemd-logind[1289]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:39:06.310545 systemd-logind[1289]: New seat seat0.
Mar 17 18:39:06.315118 systemd[1]: Started systemd-logind.service.
Mar 17 18:39:06.316887 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 18:39:06.319673 env[1303]: time="2025-03-17T18:39:06.319639573Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:39:06.323469 env[1303]: time="2025-03-17T18:39:06.323319865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325106987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325152903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325448197Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325467844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325484385Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325496497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325572870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325787232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.325992057Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:39:06.326251 env[1303]: time="2025-03-17T18:39:06.326018065Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:39:06.336159 env[1303]: time="2025-03-17T18:39:06.326086404Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:39:06.336159 env[1303]: time="2025-03-17T18:39:06.326097384Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:39:06.341200 bash[1337]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:39:06.341826 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:39:06.351901 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 18:39:06.369939 locksmithd[1333]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:39:06.379174 extend-filesystems[1340]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:39:06.379174 extend-filesystems[1340]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:39:06.379174 extend-filesystems[1340]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 18:39:06.394275 extend-filesystems[1279]: Resized filesystem in /dev/vda9
Mar 17 18:39:06.379763 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379653180Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379700940Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379713534Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379817018Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379840642Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379854628Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379880386Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379892860Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379905674Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379918839Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379931723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.379945268Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.380090871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:39:06.395308 env[1303]: time="2025-03-17T18:39:06.380158568Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:39:06.379968 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380441348Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380462618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380475412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380515738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380526919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380537789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380547437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380558428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380569689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380580670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380591059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380602491Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380704262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380717737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.395636 env[1303]: time="2025-03-17T18:39:06.380728176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.391455 systemd[1]: Started containerd.service.
Mar 17 18:39:06.395947 env[1303]: time="2025-03-17T18:39:06.380737975Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:39:06.395947 env[1303]: time="2025-03-17T18:39:06.380751169Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:39:06.395947 env[1303]: time="2025-03-17T18:39:06.380760968Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:39:06.395947 env[1303]: time="2025-03-17T18:39:06.380777900Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:39:06.395947 env[1303]: time="2025-03-17T18:39:06.380809749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.380988244Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.381042045Z" level=info msg="Connect containerd service"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.381071681Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.381478093Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.381660304Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.381687225Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.381719175Z" level=info msg="containerd successfully booted in 0.099908s"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.387590405Z" level=info msg="Start subscribing containerd event"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.387640019Z" level=info msg="Start recovering state"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.387693639Z" level=info msg="Start event monitor"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.387710380Z" level=info msg="Start snapshots syncer"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.387717684Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:39:06.396050 env[1303]: time="2025-03-17T18:39:06.387723866Z" level=info msg="Start streaming server"
Mar 17 18:39:06.497013 systemd-networkd[1079]: eth0: Gained IPv6LL
Mar 17 18:39:06.499188 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:39:06.500585 systemd[1]: Reached target network-online.target.
Mar 17 18:39:06.502860 systemd[1]: Starting kubelet.service...
Mar 17 18:39:06.676768 tar[1300]: linux-amd64/LICENSE
Mar 17 18:39:06.677018 tar[1300]: linux-amd64/README.md
Mar 17 18:39:06.681738 systemd[1]: Finished prepare-helm.service.
Mar 17 18:39:06.726762 sshd_keygen[1299]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:39:06.744567 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:39:06.746851 systemd[1]: Starting issuegen.service...
Mar 17 18:39:06.751770 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:39:06.752054 systemd[1]: Finished issuegen.service.
Mar 17 18:39:06.754326 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:39:06.759641 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:39:06.761706 systemd[1]: Started getty@tty1.service.
Mar 17 18:39:06.763433 systemd[1]: Started serial-getty@ttyS0.service.
Mar 17 18:39:06.764479 systemd[1]: Reached target getty.target.
Mar 17 18:39:07.053986 systemd[1]: Started kubelet.service.
Mar 17 18:39:07.055692 systemd[1]: Reached target multi-user.target.
Mar 17 18:39:07.058473 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:39:07.065011 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:39:07.065244 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:39:07.069261 systemd[1]: Startup finished in 6.199s (kernel) + 4.805s (userspace) = 11.005s.
Mar 17 18:39:07.472426 kubelet[1379]: E0317 18:39:07.472311 1379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:39:07.474090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:39:07.474223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:39:11.873476 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:39:11.874484 systemd[1]: Started sshd@0-10.0.0.57:22-10.0.0.1:46374.service.
Mar 17 18:39:11.907280 sshd[1390]: Accepted publickey for core from 10.0.0.1 port 46374 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:39:11.908520 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:11.916206 systemd-logind[1289]: New session 1 of user core.
Mar 17 18:39:11.916846 systemd[1]: Created slice user-500.slice.
Mar 17 18:39:11.917612 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:39:11.924814 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:39:11.925833 systemd[1]: Starting user@500.service...
Mar 17 18:39:11.928626 (systemd)[1394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:11.996385 systemd[1394]: Queued start job for default target default.target.
Mar 17 18:39:11.996570 systemd[1394]: Reached target paths.target.
Mar 17 18:39:11.996585 systemd[1394]: Reached target sockets.target.
Mar 17 18:39:11.996596 systemd[1394]: Reached target timers.target.
Mar 17 18:39:11.996606 systemd[1394]: Reached target basic.target.
Mar 17 18:39:11.996642 systemd[1394]: Reached target default.target.
Mar 17 18:39:11.996663 systemd[1394]: Startup finished in 62ms.
Mar 17 18:39:11.996734 systemd[1]: Started user@500.service.
Mar 17 18:39:11.997690 systemd[1]: Started session-1.scope.
Mar 17 18:39:12.047544 systemd[1]: Started sshd@1-10.0.0.57:22-10.0.0.1:46384.service.
Mar 17 18:39:12.082163 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 46384 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:39:12.083396 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:12.086741 systemd-logind[1289]: New session 2 of user core.
Mar 17 18:39:12.087442 systemd[1]: Started session-2.scope.
Mar 17 18:39:12.139240 sshd[1404]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:12.141306 systemd[1]: Started sshd@2-10.0.0.57:22-10.0.0.1:46390.service.
Mar 17 18:39:12.141659 systemd[1]: sshd@1-10.0.0.57:22-10.0.0.1:46384.service: Deactivated successfully.
Mar 17 18:39:12.142392 systemd-logind[1289]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:39:12.142507 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:39:12.143311 systemd-logind[1289]: Removed session 2.
Mar 17 18:39:12.169607 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 46390 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:39:12.170385 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:12.173339 systemd-logind[1289]: New session 3 of user core.
Mar 17 18:39:12.174002 systemd[1]: Started session-3.scope.
Mar 17 18:39:12.221928 sshd[1409]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:12.224121 systemd[1]: Started sshd@3-10.0.0.57:22-10.0.0.1:46392.service.
Mar 17 18:39:12.224467 systemd[1]: sshd@2-10.0.0.57:22-10.0.0.1:46390.service: Deactivated successfully.
Mar 17 18:39:12.225182 systemd-logind[1289]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:39:12.225232 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:39:12.226065 systemd-logind[1289]: Removed session 3.
Mar 17 18:39:12.253003 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 46392 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:39:12.253763 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:12.256698 systemd-logind[1289]: New session 4 of user core.
Mar 17 18:39:12.257340 systemd[1]: Started session-4.scope.
Mar 17 18:39:12.309591 sshd[1417]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:12.311609 systemd[1]: Started sshd@4-10.0.0.57:22-10.0.0.1:46402.service.
Mar 17 18:39:12.311974 systemd[1]: sshd@3-10.0.0.57:22-10.0.0.1:46392.service: Deactivated successfully.
Mar 17 18:39:12.312684 systemd-logind[1289]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:39:12.312757 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:39:12.313671 systemd-logind[1289]: Removed session 4.
Mar 17 18:39:12.341987 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 46402 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:39:12.342908 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:12.345912 systemd-logind[1289]: New session 5 of user core.
Mar 17 18:39:12.346679 systemd[1]: Started session-5.scope.
Mar 17 18:39:12.400147 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:39:12.400336 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:39:12.417362 systemd[1]: Starting docker.service...
Mar 17 18:39:12.450249 env[1442]: time="2025-03-17T18:39:12.450203619Z" level=info msg="Starting up"
Mar 17 18:39:12.451410 env[1442]: time="2025-03-17T18:39:12.451371680Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:39:12.451410 env[1442]: time="2025-03-17T18:39:12.451394513Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:39:12.451410 env[1442]: time="2025-03-17T18:39:12.451418237Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:39:12.451563 env[1442]: time="2025-03-17T18:39:12.451427975Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:39:12.452883 env[1442]: time="2025-03-17T18:39:12.452844151Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:39:12.452883 env[1442]: time="2025-03-17T18:39:12.452860372Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:39:12.452957 env[1442]: time="2025-03-17T18:39:12.452885178Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:39:12.452957 env[1442]: time="2025-03-17T18:39:12.452893834Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:39:13.039711 env[1442]: time="2025-03-17T18:39:13.039665040Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 17 18:39:13.039711 env[1442]: time="2025-03-17T18:39:13.039694144Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 17 18:39:13.039970 env[1442]: time="2025-03-17T18:39:13.039822986Z" level=info msg="Loading containers: start."
Mar 17 18:39:13.142892 kernel: Initializing XFRM netlink socket
Mar 17 18:39:13.168368 env[1442]: time="2025-03-17T18:39:13.168330383Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:39:13.212923 systemd-networkd[1079]: docker0: Link UP
Mar 17 18:39:13.228849 env[1442]: time="2025-03-17T18:39:13.228810305Z" level=info msg="Loading containers: done."
Mar 17 18:39:13.240349 env[1442]: time="2025-03-17T18:39:13.240311965Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:39:13.240454 env[1442]: time="2025-03-17T18:39:13.240439053Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:39:13.240541 env[1442]: time="2025-03-17T18:39:13.240517730Z" level=info msg="Daemon has completed initialization"
Mar 17 18:39:13.256377 systemd[1]: Started docker.service.
Mar 17 18:39:13.259676 env[1442]: time="2025-03-17T18:39:13.259643398Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:39:14.262379 env[1303]: time="2025-03-17T18:39:14.262319323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 18:39:14.798487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652718981.mount: Deactivated successfully.
Mar 17 18:39:16.332118 env[1303]: time="2025-03-17T18:39:16.332054876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:16.334265 env[1303]: time="2025-03-17T18:39:16.334221549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:16.336301 env[1303]: time="2025-03-17T18:39:16.336252188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:16.338009 env[1303]: time="2025-03-17T18:39:16.337974507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:16.338967 env[1303]: time="2025-03-17T18:39:16.338933726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 18:39:16.348961 env[1303]: time="2025-03-17T18:39:16.348930965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:39:17.725050 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:39:17.725229 systemd[1]: Stopped kubelet.service.
Mar 17 18:39:17.726680 systemd[1]: Starting kubelet.service...
Mar 17 18:39:17.838710 systemd[1]: Started kubelet.service.
Mar 17 18:39:17.896337 kubelet[1596]: E0317 18:39:17.896282 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:39:17.899397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:39:17.899595 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:39:18.644907 env[1303]: time="2025-03-17T18:39:18.644820036Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:18.647003 env[1303]: time="2025-03-17T18:39:18.646945943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:18.648934 env[1303]: time="2025-03-17T18:39:18.648893786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:18.650829 env[1303]: time="2025-03-17T18:39:18.650795703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:18.651747 env[1303]: time="2025-03-17T18:39:18.651707583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 18:39:18.662282 env[1303]: time="2025-03-17T18:39:18.662237300Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:39:20.206086 env[1303]: time="2025-03-17T18:39:20.206029020Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:20.208714 env[1303]: time="2025-03-17T18:39:20.208665114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:20.210617 env[1303]: time="2025-03-17T18:39:20.210562653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:20.212168 env[1303]: time="2025-03-17T18:39:20.212128309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:20.212787 env[1303]: time="2025-03-17T18:39:20.212750255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 18:39:20.223572 env[1303]: time="2025-03-17T18:39:20.223531935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:39:21.347648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445108578.mount: Deactivated successfully.
Mar 17 18:39:22.104345 env[1303]: time="2025-03-17T18:39:22.104276803Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:22.106952 env[1303]: time="2025-03-17T18:39:22.106921422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:22.109450 env[1303]: time="2025-03-17T18:39:22.109382187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:22.110661 env[1303]: time="2025-03-17T18:39:22.110618826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:22.111006 env[1303]: time="2025-03-17T18:39:22.110961278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 18:39:22.119142 env[1303]: time="2025-03-17T18:39:22.119105141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:39:22.657619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303496530.mount: Deactivated successfully. 
Mar 17 18:39:23.520559 env[1303]: time="2025-03-17T18:39:23.520483245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:23.522311 env[1303]: time="2025-03-17T18:39:23.522278221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:23.524117 env[1303]: time="2025-03-17T18:39:23.524088246Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:23.525649 env[1303]: time="2025-03-17T18:39:23.525629196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:23.526290 env[1303]: time="2025-03-17T18:39:23.526266661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:39:23.534453 env[1303]: time="2025-03-17T18:39:23.534399553Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:39:24.005216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442628814.mount: Deactivated successfully. 
Mar 17 18:39:24.010051 env[1303]: time="2025-03-17T18:39:24.010017437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:24.011765 env[1303]: time="2025-03-17T18:39:24.011723376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:24.013180 env[1303]: time="2025-03-17T18:39:24.013141716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:24.014407 env[1303]: time="2025-03-17T18:39:24.014375630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:24.014795 env[1303]: time="2025-03-17T18:39:24.014759720Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 18:39:24.023474 env[1303]: time="2025-03-17T18:39:24.023449647Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:39:24.564247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473992711.mount: Deactivated successfully. 
Mar 17 18:39:27.427783 env[1303]: time="2025-03-17T18:39:27.427729457Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:27.429752 env[1303]: time="2025-03-17T18:39:27.429719979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:27.431579 env[1303]: time="2025-03-17T18:39:27.431553268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:27.433253 env[1303]: time="2025-03-17T18:39:27.433222207Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:39:27.433883 env[1303]: time="2025-03-17T18:39:27.433839515Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 18:39:27.908183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:39:27.908380 systemd[1]: Stopped kubelet.service. Mar 17 18:39:27.909541 systemd[1]: Starting kubelet.service... Mar 17 18:39:27.981131 systemd[1]: Started kubelet.service. 
Mar 17 18:39:28.029336 kubelet[1673]: E0317 18:39:28.029295 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:39:28.031187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:39:28.031399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:39:29.683430 systemd[1]: Stopped kubelet.service. Mar 17 18:39:29.685239 systemd[1]: Starting kubelet.service... Mar 17 18:39:29.699689 systemd[1]: Reloading. Mar 17 18:39:29.753547 /usr/lib/systemd/system-generators/torcx-generator[1754]: time="2025-03-17T18:39:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:39:29.753576 /usr/lib/systemd/system-generators/torcx-generator[1754]: time="2025-03-17T18:39:29Z" level=info msg="torcx already run" Mar 17 18:39:30.227395 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:39:30.227413 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:39:30.243921 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:39:30.312926 systemd[1]: Started kubelet.service. Mar 17 18:39:30.316490 systemd[1]: Stopping kubelet.service... 
Mar 17 18:39:30.317426 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:39:30.317664 systemd[1]: Stopped kubelet.service. Mar 17 18:39:30.319531 systemd[1]: Starting kubelet.service... Mar 17 18:39:30.395923 systemd[1]: Started kubelet.service. Mar 17 18:39:30.429727 kubelet[1819]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:39:30.429727 kubelet[1819]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:39:30.429727 kubelet[1819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:39:30.430610 kubelet[1819]: I0317 18:39:30.430567 1819 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:39:30.721153 kubelet[1819]: I0317 18:39:30.721060 1819 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:39:30.721153 kubelet[1819]: I0317 18:39:30.721086 1819 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:39:30.721285 kubelet[1819]: I0317 18:39:30.721276 1819 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:39:30.733527 kubelet[1819]: I0317 18:39:30.733476 1819 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:39:30.734013 kubelet[1819]: E0317 18:39:30.733988 1819 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.57:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.743051 kubelet[1819]: I0317 18:39:30.743008 1819 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:39:30.745340 kubelet[1819]: I0317 18:39:30.745290 1819 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:39:30.745557 kubelet[1819]: I0317 18:39:30.745338 1819 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:39:30.746073 kubelet[1819]: I0317 18:39:30.746049 1819 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:39:30.746073 kubelet[1819]: I0317 18:39:30.746075 1819 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:39:30.746255 kubelet[1819]: I0317 18:39:30.746233 1819 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:39:30.746983 kubelet[1819]: I0317 18:39:30.746963 1819 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:39:30.747017 kubelet[1819]: I0317 18:39:30.746985 1819 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:39:30.747017 kubelet[1819]: I0317 18:39:30.747009 1819 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:39:30.747062 kubelet[1819]: I0317 18:39:30.747027 1819 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:39:30.752389 kubelet[1819]: W0317 18:39:30.752340 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.752389 kubelet[1819]: E0317 18:39:30.752387 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.757930 kubelet[1819]: W0317 18:39:30.757898 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.757930 kubelet[1819]: E0317 18:39:30.757930 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection 
refused Mar 17 18:39:30.761895 kubelet[1819]: I0317 18:39:30.761858 1819 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:39:30.763272 kubelet[1819]: I0317 18:39:30.763251 1819 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:39:30.763318 kubelet[1819]: W0317 18:39:30.763303 1819 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:39:30.763819 kubelet[1819]: I0317 18:39:30.763801 1819 server.go:1264] "Started kubelet" Mar 17 18:39:30.764615 kubelet[1819]: I0317 18:39:30.764574 1819 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:39:30.764763 kubelet[1819]: I0317 18:39:30.764726 1819 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:39:30.765102 kubelet[1819]: I0317 18:39:30.765087 1819 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:39:30.766934 kubelet[1819]: I0317 18:39:30.766918 1819 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:39:30.768670 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:39:30.768761 kubelet[1819]: I0317 18:39:30.768744 1819 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:39:30.771161 kubelet[1819]: I0317 18:39:30.771146 1819 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:39:30.771427 kubelet[1819]: I0317 18:39:30.771412 1819 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:39:30.772899 kubelet[1819]: I0317 18:39:30.772880 1819 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:39:30.774324 kubelet[1819]: W0317 18:39:30.774261 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.775030 kubelet[1819]: E0317 18:39:30.775016 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.775077 kubelet[1819]: E0317 18:39:30.774446 1819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="200ms" Mar 17 18:39:30.775742 kubelet[1819]: I0317 18:39:30.775712 1819 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:39:30.775833 kubelet[1819]: I0317 18:39:30.775789 1819 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:39:30.784747 kubelet[1819]: I0317 18:39:30.782421 1819 factory.go:221] Registration of the containerd container factory successfully 
Mar 17 18:39:30.784747 kubelet[1819]: E0317 18:39:30.782020 1819 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dab1cbbffbb2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:39:30.763782958 +0000 UTC m=+0.364671941,LastTimestamp:2025-03-17 18:39:30.763782958 +0000 UTC m=+0.364671941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:39:30.785983 kubelet[1819]: I0317 18:39:30.785916 1819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:39:30.786683 kubelet[1819]: I0317 18:39:30.786643 1819 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:39:30.786683 kubelet[1819]: I0317 18:39:30.786675 1819 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:39:30.786683 kubelet[1819]: I0317 18:39:30.786691 1819 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:39:30.786854 kubelet[1819]: E0317 18:39:30.786736 1819 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:39:30.790347 kubelet[1819]: W0317 18:39:30.790306 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.790347 kubelet[1819]: E0317 18:39:30.790348 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused Mar 17 18:39:30.797004 kubelet[1819]: I0317 18:39:30.796992 1819 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:39:30.797004 kubelet[1819]: I0317 18:39:30.797002 1819 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:39:30.797078 kubelet[1819]: I0317 18:39:30.797021 1819 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:39:30.872896 kubelet[1819]: I0317 18:39:30.872843 1819 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:39:30.873306 kubelet[1819]: E0317 18:39:30.873264 1819 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 17 18:39:30.887529 kubelet[1819]: E0317 18:39:30.887498 1819 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Mar 17 18:39:30.975854 kubelet[1819]: E0317 18:39:30.975729 1819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="400ms" Mar 17 18:39:31.075055 kubelet[1819]: I0317 18:39:31.075025 1819 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:39:31.075375 kubelet[1819]: E0317 18:39:31.075349 1819 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 17 18:39:31.088451 kubelet[1819]: E0317 18:39:31.088434 1819 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:39:31.211499 kubelet[1819]: I0317 18:39:31.211466 1819 policy_none.go:49] "None policy: Start" Mar 17 18:39:31.212077 kubelet[1819]: I0317 18:39:31.212044 1819 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:39:31.212077 kubelet[1819]: I0317 18:39:31.212068 1819 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:39:31.217391 kubelet[1819]: I0317 18:39:31.217369 1819 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:39:31.217527 kubelet[1819]: I0317 18:39:31.217496 1819 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:39:31.217599 kubelet[1819]: I0317 18:39:31.217588 1819 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:39:31.219286 kubelet[1819]: E0317 18:39:31.219254 1819 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not 
found" Mar 17 18:39:31.296488 kubelet[1819]: E0317 18:39:31.296334 1819 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.57:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.57:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dab1cbbffbb2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:39:30.763782958 +0000 UTC m=+0.364671941,LastTimestamp:2025-03-17 18:39:30.763782958 +0000 UTC m=+0.364671941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:39:31.376820 kubelet[1819]: E0317 18:39:31.376790 1819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="800ms" Mar 17 18:39:31.477035 kubelet[1819]: I0317 18:39:31.477000 1819 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:39:31.477357 kubelet[1819]: E0317 18:39:31.477322 1819 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost" Mar 17 18:39:31.489442 kubelet[1819]: I0317 18:39:31.489377 1819 topology_manager.go:215] "Topology Admit Handler" podUID="debff437311f3d610c89b73b36f7b4f1" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:39:31.490514 kubelet[1819]: I0317 18:39:31.490468 1819 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" 
podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:39:31.491312 kubelet[1819]: I0317 18:39:31.491286 1819 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:39:31.577318 kubelet[1819]: I0317 18:39:31.577227 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/debff437311f3d610c89b73b36f7b4f1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"debff437311f3d610c89b73b36f7b4f1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:39:31.577318 kubelet[1819]: I0317 18:39:31.577264 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/debff437311f3d610c89b73b36f7b4f1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"debff437311f3d610c89b73b36f7b4f1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:39:31.577318 kubelet[1819]: I0317 18:39:31.577285 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:39:31.577318 kubelet[1819]: I0317 18:39:31.577301 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:39:31.577318 kubelet[1819]: I0317 18:39:31.577314 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:31.577514 kubelet[1819]: I0317 18:39:31.577332 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:31.577514 kubelet[1819]: I0317 18:39:31.577350 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/debff437311f3d610c89b73b36f7b4f1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"debff437311f3d610c89b73b36f7b4f1\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:39:31.577514 kubelet[1819]: I0317 18:39:31.577366 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:31.577514 kubelet[1819]: I0317 18:39:31.577404 1819 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:31.682108 kubelet[1819]: W0317 18:39:31.682077 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:31.682168 kubelet[1819]: E0317 18:39:31.682111 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.57:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:31.719424 kubelet[1819]: W0317 18:39:31.719380 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:31.719474 kubelet[1819]: E0317 18:39:31.719430 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.57:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:31.794946 kubelet[1819]: E0317 18:39:31.794914 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:31.795215 kubelet[1819]: E0317 18:39:31.795179 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:31.795654 env[1303]: time="2025-03-17T18:39:31.795611174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:debff437311f3d610c89b73b36f7b4f1,Namespace:kube-system,Attempt:0,}"
Mar 17 18:39:31.796016 kubelet[1819]: E0317 18:39:31.795996 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:31.796127 env[1303]: time="2025-03-17T18:39:31.796083029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 17 18:39:31.796369 env[1303]: time="2025-03-17T18:39:31.796328950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 17 18:39:31.925161 kubelet[1819]: W0317 18:39:31.925006 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:31.925161 kubelet[1819]: E0317 18:39:31.925083 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.57:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:32.177684 kubelet[1819]: E0317 18:39:32.177569 1819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.57:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.57:6443: connect: connection refused" interval="1.6s"
Mar 17 18:39:32.272768 kubelet[1819]: W0317 18:39:32.272710 1819 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:32.272768 kubelet[1819]: E0317 18:39:32.272763 1819 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.57:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.57:6443: connect: connection refused
Mar 17 18:39:32.278668 kubelet[1819]: I0317 18:39:32.278649 1819 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:39:32.278907 kubelet[1819]: E0317 18:39:32.278885 1819 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.57:6443/api/v1/nodes\": dial tcp 10.0.0.57:6443: connect: connection refused" node="localhost"
Mar 17 18:39:32.534179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905701040.mount: Deactivated successfully.
Mar 17 18:39:32.540534 env[1303]: time="2025-03-17T18:39:32.540493103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.542512 env[1303]: time="2025-03-17T18:39:32.542482935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.543858 env[1303]: time="2025-03-17T18:39:32.543814803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.545693 env[1303]: time="2025-03-17T18:39:32.545665824Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.547501 env[1303]: time="2025-03-17T18:39:32.547467092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.548635 env[1303]: time="2025-03-17T18:39:32.548597813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.549733 env[1303]: time="2025-03-17T18:39:32.549704959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.550782 env[1303]: time="2025-03-17T18:39:32.550753345Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.552637 env[1303]: time="2025-03-17T18:39:32.552603565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.553324 env[1303]: time="2025-03-17T18:39:32.553294340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.554361 env[1303]: time="2025-03-17T18:39:32.554337627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.556377 env[1303]: time="2025-03-17T18:39:32.556346654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:39:32.568650 env[1303]: time="2025-03-17T18:39:32.568591197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:39:32.568650 env[1303]: time="2025-03-17T18:39:32.568634008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:39:32.568728 env[1303]: time="2025-03-17T18:39:32.568648244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:39:32.568801 env[1303]: time="2025-03-17T18:39:32.568755475Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2469f618073482b08cb839f634fb01c4fd860f190dd0cd57c4ea0a3fae76c4a9 pid=1859 runtime=io.containerd.runc.v2
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.593277773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.593310014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.593319492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.593470986Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c068981ff063ab17e0dfe0abdc1fade50daf7f6ec024f952322ed84d6df3822 pid=1895 runtime=io.containerd.runc.v2
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.594764602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.594812512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.594821789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:39:32.596900 env[1303]: time="2025-03-17T18:39:32.594938748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6354e47e2528c762a4cf38aa933231df331a0f29eed7597797678b813ad96198 pid=1906 runtime=io.containerd.runc.v2
Mar 17 18:39:32.620684 env[1303]: time="2025-03-17T18:39:32.620628646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:debff437311f3d610c89b73b36f7b4f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2469f618073482b08cb839f634fb01c4fd860f190dd0cd57c4ea0a3fae76c4a9\""
Mar 17 18:39:32.621498 kubelet[1819]: E0317 18:39:32.621472 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:32.623412 env[1303]: time="2025-03-17T18:39:32.623377752Z" level=info msg="CreateContainer within sandbox \"2469f618073482b08cb839f634fb01c4fd860f190dd0cd57c4ea0a3fae76c4a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:39:32.640341 env[1303]: time="2025-03-17T18:39:32.640287834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c068981ff063ab17e0dfe0abdc1fade50daf7f6ec024f952322ed84d6df3822\""
Mar 17 18:39:32.640843 kubelet[1819]: E0317 18:39:32.640818 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:32.642683 env[1303]: time="2025-03-17T18:39:32.642653501Z" level=info msg="CreateContainer within sandbox \"0c068981ff063ab17e0dfe0abdc1fade50daf7f6ec024f952322ed84d6df3822\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:39:32.645455 env[1303]: time="2025-03-17T18:39:32.645414969Z" level=info msg="CreateContainer within sandbox \"2469f618073482b08cb839f634fb01c4fd860f190dd0cd57c4ea0a3fae76c4a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6283baf8121ba2070a8b2b01270024ced83d54196aa1c8dbcba8071555409b7d\""
Mar 17 18:39:32.645769 env[1303]: time="2025-03-17T18:39:32.645740570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6354e47e2528c762a4cf38aa933231df331a0f29eed7597797678b813ad96198\""
Mar 17 18:39:32.645936 env[1303]: time="2025-03-17T18:39:32.645907894Z" level=info msg="StartContainer for \"6283baf8121ba2070a8b2b01270024ced83d54196aa1c8dbcba8071555409b7d\""
Mar 17 18:39:32.649813 kubelet[1819]: E0317 18:39:32.649788 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:32.651037 env[1303]: time="2025-03-17T18:39:32.651009371Z" level=info msg="CreateContainer within sandbox \"6354e47e2528c762a4cf38aa933231df331a0f29eed7597797678b813ad96198\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:39:32.667406 env[1303]: time="2025-03-17T18:39:32.667362589Z" level=info msg="CreateContainer within sandbox \"0c068981ff063ab17e0dfe0abdc1fade50daf7f6ec024f952322ed84d6df3822\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e97722e2c9f9e314febb269b7619a4ade34b11a0bfa7bab02c5190a21cbb3b1b\""
Mar 17 18:39:32.667809 env[1303]: time="2025-03-17T18:39:32.667783238Z" level=info msg="StartContainer for \"e97722e2c9f9e314febb269b7619a4ade34b11a0bfa7bab02c5190a21cbb3b1b\""
Mar 17 18:39:32.674140 env[1303]: time="2025-03-17T18:39:32.674099633Z" level=info msg="CreateContainer within sandbox \"6354e47e2528c762a4cf38aa933231df331a0f29eed7597797678b813ad96198\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10a16137bf734ed39e1c304eace41d89d81c1ce5de71a1d91c560b7d17084e09\""
Mar 17 18:39:32.674732 env[1303]: time="2025-03-17T18:39:32.674714617Z" level=info msg="StartContainer for \"10a16137bf734ed39e1c304eace41d89d81c1ce5de71a1d91c560b7d17084e09\""
Mar 17 18:39:32.701368 env[1303]: time="2025-03-17T18:39:32.701316224Z" level=info msg="StartContainer for \"6283baf8121ba2070a8b2b01270024ced83d54196aa1c8dbcba8071555409b7d\" returns successfully"
Mar 17 18:39:32.729021 env[1303]: time="2025-03-17T18:39:32.728975465Z" level=info msg="StartContainer for \"e97722e2c9f9e314febb269b7619a4ade34b11a0bfa7bab02c5190a21cbb3b1b\" returns successfully"
Mar 17 18:39:32.739783 env[1303]: time="2025-03-17T18:39:32.739743509Z" level=info msg="StartContainer for \"10a16137bf734ed39e1c304eace41d89d81c1ce5de71a1d91c560b7d17084e09\" returns successfully"
Mar 17 18:39:32.794766 kubelet[1819]: E0317 18:39:32.793738 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:32.796422 kubelet[1819]: E0317 18:39:32.796378 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:32.798035 kubelet[1819]: E0317 18:39:32.797986 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:33.754002 kubelet[1819]: I0317 18:39:33.753968 1819 apiserver.go:52] "Watching apiserver"
Mar 17 18:39:33.772093 kubelet[1819]: I0317 18:39:33.772052 1819 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:39:33.780813 kubelet[1819]: E0317 18:39:33.780775 1819 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 17 18:39:33.799638 kubelet[1819]: E0317 18:39:33.799596 1819 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:33.880250 kubelet[1819]: I0317 18:39:33.880202 1819 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:39:33.884725 kubelet[1819]: I0317 18:39:33.884702 1819 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 18:39:35.826226 systemd[1]: Reloading.
Mar 17 18:39:35.877498 /usr/lib/systemd/system-generators/torcx-generator[2112]: time="2025-03-17T18:39:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:39:35.877854 /usr/lib/systemd/system-generators/torcx-generator[2112]: time="2025-03-17T18:39:35Z" level=info msg="torcx already run"
Mar 17 18:39:35.944304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:39:35.944321 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:39:35.960672 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:39:36.035285 systemd[1]: Stopping kubelet.service...
Mar 17 18:39:36.035465 kubelet[1819]: E0317 18:39:36.035229 1819 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.182dab1cbbffbb2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:39:30.763782958 +0000 UTC m=+0.364671941,LastTimestamp:2025-03-17 18:39:30.763782958 +0000 UTC m=+0.364671941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 18:39:36.055259 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:39:36.055549 systemd[1]: Stopped kubelet.service.
Mar 17 18:39:36.057135 systemd[1]: Starting kubelet.service...
Mar 17 18:39:36.143507 systemd[1]: Started kubelet.service.
Mar 17 18:39:36.183852 kubelet[2168]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:39:36.183852 kubelet[2168]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:39:36.183852 kubelet[2168]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:39:36.184269 kubelet[2168]: I0317 18:39:36.183895 2168 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:39:36.187811 kubelet[2168]: I0317 18:39:36.187769 2168 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:39:36.187811 kubelet[2168]: I0317 18:39:36.187793 2168 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:39:36.188013 kubelet[2168]: I0317 18:39:36.187997 2168 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:39:36.189300 kubelet[2168]: I0317 18:39:36.189278 2168 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:39:36.190337 kubelet[2168]: I0317 18:39:36.190314 2168 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:39:36.197097 kubelet[2168]: I0317 18:39:36.197068 2168 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:39:36.197471 kubelet[2168]: I0317 18:39:36.197438 2168 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:39:36.197622 kubelet[2168]: I0317 18:39:36.197463 2168 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:39:36.197713 kubelet[2168]: I0317 18:39:36.197631 2168 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:39:36.197713 kubelet[2168]: I0317 18:39:36.197640 2168 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:39:36.197713 kubelet[2168]: I0317 18:39:36.197676 2168 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:39:36.197783 kubelet[2168]: I0317 18:39:36.197757 2168 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:39:36.197783 kubelet[2168]: I0317 18:39:36.197769 2168 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:39:36.197828 kubelet[2168]: I0317 18:39:36.197787 2168 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:39:36.197828 kubelet[2168]: I0317 18:39:36.197801 2168 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:39:36.198610 kubelet[2168]: I0317 18:39:36.198587 2168 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:39:36.198801 kubelet[2168]: I0317 18:39:36.198775 2168 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:39:36.199345 kubelet[2168]: I0317 18:39:36.199320 2168 server.go:1264] "Started kubelet"
Mar 17 18:39:36.202281 kubelet[2168]: I0317 18:39:36.200933 2168 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:39:36.202281 kubelet[2168]: I0317 18:39:36.201200 2168 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:39:36.202281 kubelet[2168]: I0317 18:39:36.201550 2168 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:39:36.202593 kubelet[2168]: I0317 18:39:36.202574 2168 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:39:36.206926 kubelet[2168]: I0317 18:39:36.205751 2168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:39:36.206926 kubelet[2168]: E0317 18:39:36.205939 2168 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:39:36.213560 kubelet[2168]: I0317 18:39:36.213526 2168 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:39:36.213729 kubelet[2168]: I0317 18:39:36.213624 2168 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:39:36.217141 kubelet[2168]: I0317 18:39:36.217125 2168 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:39:36.217379 kubelet[2168]: I0317 18:39:36.217358 2168 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:39:36.217690 kubelet[2168]: I0317 18:39:36.217676 2168 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:39:36.217751 kubelet[2168]: I0317 18:39:36.217699 2168 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:39:36.222888 kubelet[2168]: I0317 18:39:36.222820 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:39:36.224212 kubelet[2168]: I0317 18:39:36.224189 2168 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:39:36.224267 kubelet[2168]: I0317 18:39:36.224221 2168 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:39:36.224267 kubelet[2168]: I0317 18:39:36.224242 2168 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:39:36.224319 kubelet[2168]: E0317 18:39:36.224283 2168 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:39:36.269924 kubelet[2168]: I0317 18:39:36.269893 2168 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:39:36.269924 kubelet[2168]: I0317 18:39:36.269915 2168 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:39:36.269924 kubelet[2168]: I0317 18:39:36.269933 2168 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:39:36.270156 kubelet[2168]: I0317 18:39:36.270139 2168 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:39:36.270182 kubelet[2168]: I0317 18:39:36.270156 2168 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:39:36.270182 kubelet[2168]: I0317 18:39:36.270174 2168 policy_none.go:49] "None policy: Start"
Mar 17 18:39:36.270773 kubelet[2168]: I0317 18:39:36.270755 2168 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:39:36.270896 kubelet[2168]: I0317 18:39:36.270776 2168 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:39:36.270953 kubelet[2168]: I0317 18:39:36.270942 2168 state_mem.go:75] "Updated machine memory state"
Mar 17 18:39:36.271935 kubelet[2168]: I0317 18:39:36.271911 2168 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:39:36.272076 kubelet[2168]: I0317 18:39:36.272050 2168 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:39:36.272745 kubelet[2168]: I0317 18:39:36.272728 2168 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:39:36.324997 kubelet[2168]: I0317 18:39:36.324941 2168 topology_manager.go:215] "Topology Admit Handler" podUID="debff437311f3d610c89b73b36f7b4f1" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 18:39:36.325153 kubelet[2168]: I0317 18:39:36.325079 2168 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 18:39:36.325153 kubelet[2168]: I0317 18:39:36.325129 2168 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 18:39:36.379723 kubelet[2168]: I0317 18:39:36.379679 2168 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:39:36.385163 kubelet[2168]: I0317 18:39:36.385146 2168 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 17 18:39:36.385226 kubelet[2168]: I0317 18:39:36.385209 2168 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 18:39:36.419959 kubelet[2168]: I0317 18:39:36.419245 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:36.419959 kubelet[2168]: I0317 18:39:36.419276 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:36.419959 kubelet[2168]: I0317 18:39:36.419300 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:36.419959 kubelet[2168]: I0317 18:39:36.419323 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 18:39:36.419959 kubelet[2168]: I0317 18:39:36.419339 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/debff437311f3d610c89b73b36f7b4f1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"debff437311f3d610c89b73b36f7b4f1\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:39:36.420191 kubelet[2168]: I0317 18:39:36.419355 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/debff437311f3d610c89b73b36f7b4f1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"debff437311f3d610c89b73b36f7b4f1\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:39:36.420191 kubelet[2168]: I0317 18:39:36.419369 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/debff437311f3d610c89b73b36f7b4f1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"debff437311f3d610c89b73b36f7b4f1\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:39:36.420191 kubelet[2168]: I0317 18:39:36.419384 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:36.420191 kubelet[2168]: I0317 18:39:36.419401 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:36.633207 kubelet[2168]: E0317 18:39:36.633166 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:36.633342 kubelet[2168]: E0317 18:39:36.633325 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:36.633380 kubelet[2168]: E0317 18:39:36.633325 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:36.823658 sudo[2202]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:39:36.823839 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:39:37.198272 kubelet[2168]: I0317 18:39:37.198170 2168 apiserver.go:52] "Watching apiserver"
Mar 17 18:39:37.218782 kubelet[2168]: I0317 18:39:37.218747 2168 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:39:37.232299 kubelet[2168]: E0317 18:39:37.232280 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:37.240295 kubelet[2168]: E0317 18:39:37.240278 2168 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:39:37.240498 kubelet[2168]: E0317 18:39:37.240394 2168 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 17 18:39:37.240969 kubelet[2168]: E0317 18:39:37.240815 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:37.240969 kubelet[2168]: E0317 18:39:37.240898 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:39:37.253519 kubelet[2168]: I0317 18:39:37.253473 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.253461602 podStartE2EDuration="1.253461602s" podCreationTimestamp="2025-03-17 18:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:39:37.253211382 +0000 UTC m=+1.106017729" watchObservedRunningTime="2025-03-17 18:39:37.253461602 +0000 UTC m=+1.106267939"
Mar 17 18:39:37.264498 kubelet[2168]: I0317 18:39:37.264467 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.264456441 podStartE2EDuration="1.264456441s" podCreationTimestamp="2025-03-17 18:39:36 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:39:37.258756952 +0000 UTC m=+1.111563299" watchObservedRunningTime="2025-03-17 18:39:37.264456441 +0000 UTC m=+1.117262788" Mar 17 18:39:37.271258 kubelet[2168]: I0317 18:39:37.271228 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.271219244 podStartE2EDuration="1.271219244s" podCreationTimestamp="2025-03-17 18:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:39:37.264792301 +0000 UTC m=+1.117598648" watchObservedRunningTime="2025-03-17 18:39:37.271219244 +0000 UTC m=+1.124025591" Mar 17 18:39:37.274521 sudo[2202]: pam_unix(sudo:session): session closed for user root Mar 17 18:39:38.233344 kubelet[2168]: E0317 18:39:38.233311 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:38.233701 kubelet[2168]: E0317 18:39:38.233449 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:38.485447 sudo[1429]: pam_unix(sudo:session): session closed for user root Mar 17 18:39:38.486613 sshd[1423]: pam_unix(sshd:session): session closed for user core Mar 17 18:39:38.488431 systemd[1]: sshd@4-10.0.0.57:22-10.0.0.1:46402.service: Deactivated successfully. Mar 17 18:39:38.489337 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:39:38.489723 systemd-logind[1289]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:39:38.490539 systemd-logind[1289]: Removed session 5. 
Mar 17 18:39:39.271574 kubelet[2168]: E0317 18:39:39.271541 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:46.656107 kubelet[2168]: E0317 18:39:46.656070 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:46.848540 kubelet[2168]: E0317 18:39:46.848506 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:47.244668 kubelet[2168]: E0317 18:39:47.244624 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:49.275773 kubelet[2168]: E0317 18:39:49.275739 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:49.618193 kubelet[2168]: I0317 18:39:49.618065 2168 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:39:49.618514 env[1303]: time="2025-03-17T18:39:49.618462592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:39:49.618812 kubelet[2168]: I0317 18:39:49.618730 2168 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:39:50.602852 kubelet[2168]: I0317 18:39:50.602799 2168 topology_manager.go:215] "Topology Admit Handler" podUID="924c885d-5e49-40d8-890e-e45fb16ba92a" podNamespace="kube-system" podName="cilium-operator-599987898-mmdd5" Mar 17 18:39:50.658596 kubelet[2168]: I0317 18:39:50.658561 2168 topology_manager.go:215] "Topology Admit Handler" podUID="dfea3790-254d-4fcf-93e4-55283b485e5d" podNamespace="kube-system" podName="kube-proxy-msbmv" Mar 17 18:39:50.664754 kubelet[2168]: I0317 18:39:50.664718 2168 topology_manager.go:215] "Topology Admit Handler" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" podNamespace="kube-system" podName="cilium-6mpmm" Mar 17 18:39:50.710041 kubelet[2168]: I0317 18:39:50.709996 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/924c885d-5e49-40d8-890e-e45fb16ba92a-cilium-config-path\") pod \"cilium-operator-599987898-mmdd5\" (UID: \"924c885d-5e49-40d8-890e-e45fb16ba92a\") " pod="kube-system/cilium-operator-599987898-mmdd5" Mar 17 18:39:50.710041 kubelet[2168]: I0317 18:39:50.710037 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtzg\" (UniqueName: \"kubernetes.io/projected/924c885d-5e49-40d8-890e-e45fb16ba92a-kube-api-access-cxtzg\") pod \"cilium-operator-599987898-mmdd5\" (UID: \"924c885d-5e49-40d8-890e-e45fb16ba92a\") " pod="kube-system/cilium-operator-599987898-mmdd5" Mar 17 18:39:50.810642 kubelet[2168]: I0317 18:39:50.810569 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfea3790-254d-4fcf-93e4-55283b485e5d-lib-modules\") pod \"kube-proxy-msbmv\" (UID: \"dfea3790-254d-4fcf-93e4-55283b485e5d\") 
" pod="kube-system/kube-proxy-msbmv" Mar 17 18:39:50.810642 kubelet[2168]: I0317 18:39:50.810621 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v5x4\" (UniqueName: \"kubernetes.io/projected/dfea3790-254d-4fcf-93e4-55283b485e5d-kube-api-access-7v5x4\") pod \"kube-proxy-msbmv\" (UID: \"dfea3790-254d-4fcf-93e4-55283b485e5d\") " pod="kube-system/kube-proxy-msbmv" Mar 17 18:39:50.810642 kubelet[2168]: I0317 18:39:50.810637 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-hostproc\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810642 kubelet[2168]: I0317 18:39:50.810651 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cni-path\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810956 kubelet[2168]: I0317 18:39:50.810664 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-etc-cni-netd\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810956 kubelet[2168]: I0317 18:39:50.810729 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-kernel\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810956 kubelet[2168]: I0317 18:39:50.810800 2168 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-cgroup\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810956 kubelet[2168]: I0317 18:39:50.810822 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-lib-modules\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810956 kubelet[2168]: I0317 18:39:50.810837 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5jbt\" (UniqueName: \"kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-kube-api-access-g5jbt\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.810956 kubelet[2168]: I0317 18:39:50.810855 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-xtables-lock\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.811161 kubelet[2168]: I0317 18:39:50.810911 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-net\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.811161 kubelet[2168]: I0317 18:39:50.810946 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-hubble-tls\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.811161 kubelet[2168]: I0317 18:39:50.810999 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-bpf-maps\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.811161 kubelet[2168]: I0317 18:39:50.811028 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-config-path\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.811161 kubelet[2168]: I0317 18:39:50.811050 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dfea3790-254d-4fcf-93e4-55283b485e5d-kube-proxy\") pod \"kube-proxy-msbmv\" (UID: \"dfea3790-254d-4fcf-93e4-55283b485e5d\") " pod="kube-system/kube-proxy-msbmv" Mar 17 18:39:50.811161 kubelet[2168]: I0317 18:39:50.811067 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-run\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.811344 kubelet[2168]: I0317 18:39:50.811108 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfea3790-254d-4fcf-93e4-55283b485e5d-xtables-lock\") pod \"kube-proxy-msbmv\" (UID: 
\"dfea3790-254d-4fcf-93e4-55283b485e5d\") " pod="kube-system/kube-proxy-msbmv" Mar 17 18:39:50.811344 kubelet[2168]: I0317 18:39:50.811132 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d683e4fe-4fcd-4270-933b-482485f025c0-clustermesh-secrets\") pod \"cilium-6mpmm\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") " pod="kube-system/cilium-6mpmm" Mar 17 18:39:50.906292 kubelet[2168]: E0317 18:39:50.906060 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:50.907194 env[1303]: time="2025-03-17T18:39:50.906686960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mmdd5,Uid:924c885d-5e49-40d8-890e-e45fb16ba92a,Namespace:kube-system,Attempt:0,}" Mar 17 18:39:50.962233 kubelet[2168]: E0317 18:39:50.962189 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:50.962680 env[1303]: time="2025-03-17T18:39:50.962639349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-msbmv,Uid:dfea3790-254d-4fcf-93e4-55283b485e5d,Namespace:kube-system,Attempt:0,}" Mar 17 18:39:50.974228 kubelet[2168]: E0317 18:39:50.974200 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:50.974515 env[1303]: time="2025-03-17T18:39:50.974479678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mpmm,Uid:d683e4fe-4fcd-4270-933b-482485f025c0,Namespace:kube-system,Attempt:0,}" Mar 17 18:39:51.077142 update_engine[1292]: I0317 18:39:51.077078 1292 update_attempter.cc:509] Updating boot flags... 
Mar 17 18:39:51.980676 env[1303]: time="2025-03-17T18:39:51.980611377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:39:51.980676 env[1303]: time="2025-03-17T18:39:51.980652395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:39:51.980676 env[1303]: time="2025-03-17T18:39:51.980663175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:51.981024 env[1303]: time="2025-03-17T18:39:51.980854550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d707bf70c083ab8a55a91a2a6b40ff5e63353cb41f181a545eb142e99c8b631 pid=2279 runtime=io.containerd.runc.v2 Mar 17 18:39:51.994227 env[1303]: time="2025-03-17T18:39:51.994061559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:39:51.994227 env[1303]: time="2025-03-17T18:39:51.994104781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:39:51.994227 env[1303]: time="2025-03-17T18:39:51.994114850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:51.994502 env[1303]: time="2025-03-17T18:39:51.994439017Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78 pid=2311 runtime=io.containerd.runc.v2 Mar 17 18:39:52.004859 env[1303]: time="2025-03-17T18:39:52.004707967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:39:52.004859 env[1303]: time="2025-03-17T18:39:52.004741651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:39:52.004859 env[1303]: time="2025-03-17T18:39:52.004751440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:39:52.005250 env[1303]: time="2025-03-17T18:39:52.005169254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a pid=2324 runtime=io.containerd.runc.v2 Mar 17 18:39:52.028518 env[1303]: time="2025-03-17T18:39:52.028478030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-msbmv,Uid:dfea3790-254d-4fcf-93e4-55283b485e5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d707bf70c083ab8a55a91a2a6b40ff5e63353cb41f181a545eb142e99c8b631\"" Mar 17 18:39:52.029063 kubelet[2168]: E0317 18:39:52.029037 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:52.032486 env[1303]: time="2025-03-17T18:39:52.032448752Z" level=info msg="CreateContainer within sandbox \"5d707bf70c083ab8a55a91a2a6b40ff5e63353cb41f181a545eb142e99c8b631\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:39:52.039114 env[1303]: time="2025-03-17T18:39:52.039057860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mpmm,Uid:d683e4fe-4fcd-4270-933b-482485f025c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\"" Mar 17 18:39:52.039829 kubelet[2168]: E0317 18:39:52.039807 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:52.040709 env[1303]: time="2025-03-17T18:39:52.040669215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:39:52.055364 env[1303]: time="2025-03-17T18:39:52.055319484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mmdd5,Uid:924c885d-5e49-40d8-890e-e45fb16ba92a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78\"" Mar 17 18:39:52.056206 kubelet[2168]: E0317 18:39:52.056185 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:52.059503 env[1303]: time="2025-03-17T18:39:52.059472783Z" level=info msg="CreateContainer within sandbox \"5d707bf70c083ab8a55a91a2a6b40ff5e63353cb41f181a545eb142e99c8b631\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aa3344ad45fd697eb76bd3989ca98bc2fb15d84bc5ed444453e16bb07fde3598\"" Mar 17 18:39:52.060039 env[1303]: time="2025-03-17T18:39:52.060014623Z" level=info msg="StartContainer for \"aa3344ad45fd697eb76bd3989ca98bc2fb15d84bc5ed444453e16bb07fde3598\"" Mar 17 18:39:52.109545 env[1303]: time="2025-03-17T18:39:52.109487195Z" level=info msg="StartContainer for \"aa3344ad45fd697eb76bd3989ca98bc2fb15d84bc5ed444453e16bb07fde3598\" returns successfully" Mar 17 18:39:52.252955 kubelet[2168]: E0317 18:39:52.252833 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:39:52.458905 kubelet[2168]: I0317 18:39:52.458838 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-msbmv" 
podStartSLOduration=2.458820524 podStartE2EDuration="2.458820524s" podCreationTimestamp="2025-03-17 18:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:39:52.45867684 +0000 UTC m=+16.311483187" watchObservedRunningTime="2025-03-17 18:39:52.458820524 +0000 UTC m=+16.311626861" Mar 17 18:39:56.824402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3450171409.mount: Deactivated successfully. Mar 17 18:40:00.812453 env[1303]: time="2025-03-17T18:40:00.812406956Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:40:00.814374 env[1303]: time="2025-03-17T18:40:00.814351324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:40:00.816062 env[1303]: time="2025-03-17T18:40:00.816039627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:40:00.816524 env[1303]: time="2025-03-17T18:40:00.816492804Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 18:40:00.817439 env[1303]: time="2025-03-17T18:40:00.817419916Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:40:00.819213 env[1303]: time="2025-03-17T18:40:00.819189453Z" level=info 
msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:40:00.837397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631110923.mount: Deactivated successfully. Mar 17 18:40:00.838970 env[1303]: time="2025-03-17T18:40:00.838922350Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\"" Mar 17 18:40:00.839551 env[1303]: time="2025-03-17T18:40:00.839364436Z" level=info msg="StartContainer for \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\"" Mar 17 18:40:00.854880 systemd[1]: run-containerd-runc-k8s.io-23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43-runc.6nZ5Fj.mount: Deactivated successfully. Mar 17 18:40:00.879697 env[1303]: time="2025-03-17T18:40:00.879655890Z" level=info msg="StartContainer for \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\" returns successfully" Mar 17 18:40:01.356156 kubelet[2168]: E0317 18:40:01.265042 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:40:01.364227 env[1303]: time="2025-03-17T18:40:01.364175637Z" level=info msg="shim disconnected" id=23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43 Mar 17 18:40:01.364363 env[1303]: time="2025-03-17T18:40:01.364330350Z" level=warning msg="cleaning up after shim disconnected" id=23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43 namespace=k8s.io Mar 17 18:40:01.364363 env[1303]: time="2025-03-17T18:40:01.364353444Z" level=info msg="cleaning up dead shim" Mar 17 18:40:01.373265 env[1303]: time="2025-03-17T18:40:01.373214171Z" level=warning 
msg="cleanup warnings time=\"2025-03-17T18:40:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2601 runtime=io.containerd.runc.v2\n" Mar 17 18:40:01.835853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43-rootfs.mount: Deactivated successfully. Mar 17 18:40:02.267565 kubelet[2168]: E0317 18:40:02.267535 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:40:02.274185 env[1303]: time="2025-03-17T18:40:02.274129084Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:40:02.287980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554728487.mount: Deactivated successfully. Mar 17 18:40:02.291076 env[1303]: time="2025-03-17T18:40:02.291018792Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\"" Mar 17 18:40:02.291430 env[1303]: time="2025-03-17T18:40:02.291401875Z" level=info msg="StartContainer for \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\"" Mar 17 18:40:02.328733 env[1303]: time="2025-03-17T18:40:02.328673558Z" level=info msg="StartContainer for \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\" returns successfully" Mar 17 18:40:02.336409 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:40:02.336635 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:40:02.336783 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:40:02.338328 systemd[1]: Starting systemd-sysctl.service... 
Mar 17 18:40:02.345283 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:40:02.362118 env[1303]: time="2025-03-17T18:40:02.362077544Z" level=info msg="shim disconnected" id=4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d Mar 17 18:40:02.362118 env[1303]: time="2025-03-17T18:40:02.362116998Z" level=warning msg="cleaning up after shim disconnected" id=4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d namespace=k8s.io Mar 17 18:40:02.362340 env[1303]: time="2025-03-17T18:40:02.362126266Z" level=info msg="cleaning up dead shim" Mar 17 18:40:02.368040 env[1303]: time="2025-03-17T18:40:02.367996216Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2666 runtime=io.containerd.runc.v2\n" Mar 17 18:40:02.835306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d-rootfs.mount: Deactivated successfully. Mar 17 18:40:03.270672 kubelet[2168]: E0317 18:40:03.270630 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:40:03.272324 env[1303]: time="2025-03-17T18:40:03.272263078Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:40:03.530853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033003014.mount: Deactivated successfully. 
Mar 17 18:40:03.544113 env[1303]: time="2025-03-17T18:40:03.544065705Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\"" Mar 17 18:40:03.544586 env[1303]: time="2025-03-17T18:40:03.544540011Z" level=info msg="StartContainer for \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\"" Mar 17 18:40:03.590041 env[1303]: time="2025-03-17T18:40:03.589984895Z" level=info msg="StartContainer for \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\" returns successfully" Mar 17 18:40:03.620782 env[1303]: time="2025-03-17T18:40:03.620734049Z" level=info msg="shim disconnected" id=ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25 Mar 17 18:40:03.620782 env[1303]: time="2025-03-17T18:40:03.620781799Z" level=warning msg="cleaning up after shim disconnected" id=ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25 namespace=k8s.io Mar 17 18:40:03.621048 env[1303]: time="2025-03-17T18:40:03.620793041Z" level=info msg="cleaning up dead shim" Mar 17 18:40:03.627487 env[1303]: time="2025-03-17T18:40:03.627437677Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2723 runtime=io.containerd.runc.v2\n" Mar 17 18:40:03.835308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25-rootfs.mount: Deactivated successfully. 
Mar 17 18:40:04.274098 kubelet[2168]: E0317 18:40:04.274068 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:04.275581 env[1303]: time="2025-03-17T18:40:04.275546876Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:40:04.336262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1786589036.mount: Deactivated successfully.
Mar 17 18:40:04.338612 env[1303]: time="2025-03-17T18:40:04.338559252Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\""
Mar 17 18:40:04.339298 env[1303]: time="2025-03-17T18:40:04.339253433Z" level=info msg="StartContainer for \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\""
Mar 17 18:40:04.416272 env[1303]: time="2025-03-17T18:40:04.416231369Z" level=info msg="StartContainer for \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\" returns successfully"
Mar 17 18:40:04.690776 env[1303]: time="2025-03-17T18:40:04.690626330Z" level=info msg="shim disconnected" id=733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a
Mar 17 18:40:04.690776 env[1303]: time="2025-03-17T18:40:04.690683307Z" level=warning msg="cleaning up after shim disconnected" id=733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a namespace=k8s.io
Mar 17 18:40:04.690776 env[1303]: time="2025-03-17T18:40:04.690693105Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:04.696862 env[1303]: time="2025-03-17T18:40:04.696823086Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2779 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:04.697564 env[1303]: time="2025-03-17T18:40:04.697488662Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:40:04.700756 env[1303]: time="2025-03-17T18:40:04.700717627Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:40:04.702353 env[1303]: time="2025-03-17T18:40:04.702314391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:40:04.702980 env[1303]: time="2025-03-17T18:40:04.702939090Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:40:04.709535 env[1303]: time="2025-03-17T18:40:04.709497188Z" level=info msg="CreateContainer within sandbox \"d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:40:04.720991 env[1303]: time="2025-03-17T18:40:04.720946900Z" level=info msg="CreateContainer within sandbox \"d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\""
Mar 17 18:40:04.721362 env[1303]: time="2025-03-17T18:40:04.721333138Z" level=info msg="StartContainer for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\""
Mar 17 18:40:04.758055 env[1303]: time="2025-03-17T18:40:04.757997603Z" level=info msg="StartContainer for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" returns successfully"
Mar 17 18:40:04.837176 systemd[1]: run-containerd-runc-k8s.io-733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a-runc.ww0qBO.mount: Deactivated successfully.
Mar 17 18:40:04.837357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a-rootfs.mount: Deactivated successfully.
Mar 17 18:40:04.918066 systemd[1]: Started sshd@5-10.0.0.57:22-10.0.0.1:48088.service.
Mar 17 18:40:04.952084 sshd[2829]: Accepted publickey for core from 10.0.0.1 port 48088 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:04.953075 sshd[2829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:04.957341 systemd[1]: Started session-6.scope.
Mar 17 18:40:04.957915 systemd-logind[1289]: New session 6 of user core.
Mar 17 18:40:05.277938 kubelet[2168]: E0317 18:40:05.277804 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:05.279341 kubelet[2168]: E0317 18:40:05.279313 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:05.280688 env[1303]: time="2025-03-17T18:40:05.280644808Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:40:05.300481 env[1303]: time="2025-03-17T18:40:05.300429407Z" level=info msg="CreateContainer within sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\""
Mar 17 18:40:05.300883 env[1303]: time="2025-03-17T18:40:05.300841033Z" level=info msg="StartContainer for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\""
Mar 17 18:40:05.340451 env[1303]: time="2025-03-17T18:40:05.340407585Z" level=info msg="StartContainer for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" returns successfully"
Mar 17 18:40:05.440905 sshd[2829]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:05.445644 systemd-logind[1289]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:40:05.446061 systemd[1]: sshd@5-10.0.0.57:22-10.0.0.1:48088.service: Deactivated successfully.
Mar 17 18:40:05.446835 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:40:05.448124 systemd-logind[1289]: Removed session 6.
Mar 17 18:40:05.489507 kubelet[2168]: I0317 18:40:05.489470 2168 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:40:05.559013 kubelet[2168]: I0317 18:40:05.558856 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mmdd5" podStartSLOduration=2.912096502 podStartE2EDuration="15.558838808s" podCreationTimestamp="2025-03-17 18:39:50 +0000 UTC" firstStartedPulling="2025-03-17 18:39:52.056980523 +0000 UTC m=+15.909786870" lastFinishedPulling="2025-03-17 18:40:04.703722839 +0000 UTC m=+28.556529176" observedRunningTime="2025-03-17 18:40:05.548918922 +0000 UTC m=+29.401725269" watchObservedRunningTime="2025-03-17 18:40:05.558838808 +0000 UTC m=+29.411645155"
Mar 17 18:40:05.560380 kubelet[2168]: I0317 18:40:05.560352 2168 topology_manager.go:215] "Topology Admit Handler" podUID="70ffa84e-ab06-44bf-b863-e235cea1dc2c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-25ppq"
Mar 17 18:40:05.561082 kubelet[2168]: I0317 18:40:05.561037 2168 topology_manager.go:215] "Topology Admit Handler" podUID="e96af643-680b-4433-aee5-b766149ca658" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dstr6"
Mar 17 18:40:05.608984 kubelet[2168]: I0317 18:40:05.608937 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96af643-680b-4433-aee5-b766149ca658-config-volume\") pod \"coredns-7db6d8ff4d-dstr6\" (UID: \"e96af643-680b-4433-aee5-b766149ca658\") " pod="kube-system/coredns-7db6d8ff4d-dstr6"
Mar 17 18:40:05.608984 kubelet[2168]: I0317 18:40:05.608982 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvztd\" (UniqueName: \"kubernetes.io/projected/70ffa84e-ab06-44bf-b863-e235cea1dc2c-kube-api-access-qvztd\") pod \"coredns-7db6d8ff4d-25ppq\" (UID: \"70ffa84e-ab06-44bf-b863-e235cea1dc2c\") " pod="kube-system/coredns-7db6d8ff4d-25ppq"
Mar 17 18:40:05.609171 kubelet[2168]: I0317 18:40:05.609006 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzmv\" (UniqueName: \"kubernetes.io/projected/e96af643-680b-4433-aee5-b766149ca658-kube-api-access-4fzmv\") pod \"coredns-7db6d8ff4d-dstr6\" (UID: \"e96af643-680b-4433-aee5-b766149ca658\") " pod="kube-system/coredns-7db6d8ff4d-dstr6"
Mar 17 18:40:05.609171 kubelet[2168]: I0317 18:40:05.609020 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70ffa84e-ab06-44bf-b863-e235cea1dc2c-config-volume\") pod \"coredns-7db6d8ff4d-25ppq\" (UID: \"70ffa84e-ab06-44bf-b863-e235cea1dc2c\") " pod="kube-system/coredns-7db6d8ff4d-25ppq"
Mar 17 18:40:05.866589 kubelet[2168]: E0317 18:40:05.866486 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:05.866824 kubelet[2168]: E0317 18:40:05.866759 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:05.867463 env[1303]: time="2025-03-17T18:40:05.867037460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-25ppq,Uid:70ffa84e-ab06-44bf-b863-e235cea1dc2c,Namespace:kube-system,Attempt:0,}"
Mar 17 18:40:05.867463 env[1303]: time="2025-03-17T18:40:05.867236666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dstr6,Uid:e96af643-680b-4433-aee5-b766149ca658,Namespace:kube-system,Attempt:0,}"
Mar 17 18:40:06.282449 kubelet[2168]: E0317 18:40:06.282419 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:06.282831 kubelet[2168]: E0317 18:40:06.282718 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:06.295475 kubelet[2168]: I0317 18:40:06.295263 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6mpmm" podStartSLOduration=7.518217818 podStartE2EDuration="16.295245759s" podCreationTimestamp="2025-03-17 18:39:50 +0000 UTC" firstStartedPulling="2025-03-17 18:39:52.040268091 +0000 UTC m=+15.893074438" lastFinishedPulling="2025-03-17 18:40:00.817296022 +0000 UTC m=+24.670102379" observedRunningTime="2025-03-17 18:40:06.294998372 +0000 UTC m=+30.147804719" watchObservedRunningTime="2025-03-17 18:40:06.295245759 +0000 UTC m=+30.148052106"
Mar 17 18:40:07.284388 kubelet[2168]: E0317 18:40:07.284351 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:08.111613 systemd-networkd[1079]: cilium_host: Link UP
Mar 17 18:40:08.111731 systemd-networkd[1079]: cilium_net: Link UP
Mar 17 18:40:08.114953 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:40:08.115005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:40:08.113661 systemd-networkd[1079]: cilium_net: Gained carrier
Mar 17 18:40:08.115142 systemd-networkd[1079]: cilium_host: Gained carrier
Mar 17 18:40:08.115261 systemd-networkd[1079]: cilium_net: Gained IPv6LL
Mar 17 18:40:08.115378 systemd-networkd[1079]: cilium_host: Gained IPv6LL
Mar 17 18:40:08.181560 systemd-networkd[1079]: cilium_vxlan: Link UP
Mar 17 18:40:08.181569 systemd-networkd[1079]: cilium_vxlan: Gained carrier
Mar 17 18:40:08.286672 kubelet[2168]: E0317 18:40:08.286630 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:08.360897 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:40:08.874146 systemd-networkd[1079]: lxc_health: Link UP
Mar 17 18:40:08.887287 systemd-networkd[1079]: lxc_health: Gained carrier
Mar 17 18:40:08.887891 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:40:09.288505 kubelet[2168]: E0317 18:40:09.288471 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:09.446144 systemd-networkd[1079]: lxc194a4420aa68: Link UP
Mar 17 18:40:09.451895 kernel: eth0: renamed from tmpd32b3
Mar 17 18:40:09.459578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:40:09.459619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc194a4420aa68: link becomes ready
Mar 17 18:40:09.459788 systemd-networkd[1079]: lxc194a4420aa68: Gained carrier
Mar 17 18:40:09.465985 systemd-networkd[1079]: lxcaf7c2141222f: Link UP
Mar 17 18:40:09.473925 kernel: eth0: renamed from tmpe3f7a
Mar 17 18:40:09.481054 systemd-networkd[1079]: cilium_vxlan: Gained IPv6LL
Mar 17 18:40:09.483454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:40:09.483494 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcaf7c2141222f: link becomes ready
Mar 17 18:40:09.483566 systemd-networkd[1079]: lxcaf7c2141222f: Gained carrier
Mar 17 18:40:10.176998 systemd-networkd[1079]: lxc_health: Gained IPv6LL
Mar 17 18:40:10.290096 kubelet[2168]: E0317 18:40:10.290059 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:10.443674 systemd[1]: Started sshd@6-10.0.0.57:22-10.0.0.1:48096.service.
Mar 17 18:40:10.475987 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 48096 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:10.477078 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:10.481184 systemd-logind[1289]: New session 7 of user core.
Mar 17 18:40:10.481828 systemd[1]: Started session-7.scope.
Mar 17 18:40:10.601975 sshd[3368]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:10.604273 systemd[1]: sshd@6-10.0.0.57:22-10.0.0.1:48096.service: Deactivated successfully.
Mar 17 18:40:10.605235 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:40:10.605257 systemd-logind[1289]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:40:10.606258 systemd-logind[1289]: Removed session 7.
Mar 17 18:40:10.752994 systemd-networkd[1079]: lxc194a4420aa68: Gained IPv6LL
Mar 17 18:40:11.290755 kubelet[2168]: E0317 18:40:11.290723 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:11.457000 systemd-networkd[1079]: lxcaf7c2141222f: Gained IPv6LL
Mar 17 18:40:12.622937 env[1303]: time="2025-03-17T18:40:12.622741360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:40:12.622937 env[1303]: time="2025-03-17T18:40:12.622787958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:40:12.622937 env[1303]: time="2025-03-17T18:40:12.622797767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:40:12.623338 env[1303]: time="2025-03-17T18:40:12.623068106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d32b3ac56e136ca183c6ad8cf1f97ac6965817faf6031498b7baf0e9391a68b2 pid=3406 runtime=io.containerd.runc.v2
Mar 17 18:40:12.631443 env[1303]: time="2025-03-17T18:40:12.625716071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:40:12.631443 env[1303]: time="2025-03-17T18:40:12.625763239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:40:12.631443 env[1303]: time="2025-03-17T18:40:12.625773128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:40:12.631443 env[1303]: time="2025-03-17T18:40:12.625917810Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e3f7ad686178f02262f7127f00d3ee714679402cb8f54812003b80d488e89d7d pid=3414 runtime=io.containerd.runc.v2
Mar 17 18:40:12.642737 systemd[1]: run-containerd-runc-k8s.io-e3f7ad686178f02262f7127f00d3ee714679402cb8f54812003b80d488e89d7d-runc.nJWrx1.mount: Deactivated successfully.
Mar 17 18:40:12.655262 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:40:12.655679 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:40:12.679782 env[1303]: time="2025-03-17T18:40:12.679033656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dstr6,Uid:e96af643-680b-4433-aee5-b766149ca658,Namespace:kube-system,Attempt:0,} returns sandbox id \"d32b3ac56e136ca183c6ad8cf1f97ac6965817faf6031498b7baf0e9391a68b2\""
Mar 17 18:40:12.680340 kubelet[2168]: E0317 18:40:12.680308 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:12.681391 env[1303]: time="2025-03-17T18:40:12.681318848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-25ppq,Uid:70ffa84e-ab06-44bf-b863-e235cea1dc2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f7ad686178f02262f7127f00d3ee714679402cb8f54812003b80d488e89d7d\""
Mar 17 18:40:12.682446 env[1303]: time="2025-03-17T18:40:12.682420702Z" level=info msg="CreateContainer within sandbox \"d32b3ac56e136ca183c6ad8cf1f97ac6965817faf6031498b7baf0e9391a68b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:40:12.683034 kubelet[2168]: E0317 18:40:12.683011 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:12.686117 env[1303]: time="2025-03-17T18:40:12.686085422Z" level=info msg="CreateContainer within sandbox \"e3f7ad686178f02262f7127f00d3ee714679402cb8f54812003b80d488e89d7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:40:12.697846 env[1303]: time="2025-03-17T18:40:12.697804584Z" level=info msg="CreateContainer within sandbox \"d32b3ac56e136ca183c6ad8cf1f97ac6965817faf6031498b7baf0e9391a68b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c211dfbf8e7f06a21b10107d5094cf817d5aa134ca8b12a29966294eafaf84e\""
Mar 17 18:40:12.698240 env[1303]: time="2025-03-17T18:40:12.698219946Z" level=info msg="StartContainer for \"2c211dfbf8e7f06a21b10107d5094cf817d5aa134ca8b12a29966294eafaf84e\""
Mar 17 18:40:12.706042 env[1303]: time="2025-03-17T18:40:12.705989864Z" level=info msg="CreateContainer within sandbox \"e3f7ad686178f02262f7127f00d3ee714679402cb8f54812003b80d488e89d7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a63677c430c293105c979faad2f33a13360312d28a6e0751e411309a287b1dc\""
Mar 17 18:40:12.707378 env[1303]: time="2025-03-17T18:40:12.707347970Z" level=info msg="StartContainer for \"7a63677c430c293105c979faad2f33a13360312d28a6e0751e411309a287b1dc\""
Mar 17 18:40:12.739581 env[1303]: time="2025-03-17T18:40:12.739539705Z" level=info msg="StartContainer for \"2c211dfbf8e7f06a21b10107d5094cf817d5aa134ca8b12a29966294eafaf84e\" returns successfully"
Mar 17 18:40:12.752312 env[1303]: time="2025-03-17T18:40:12.752114338Z" level=info msg="StartContainer for \"7a63677c430c293105c979faad2f33a13360312d28a6e0751e411309a287b1dc\" returns successfully"
Mar 17 18:40:13.295494 kubelet[2168]: E0317 18:40:13.295450 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:13.297228 kubelet[2168]: E0317 18:40:13.297084 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:13.305103 kubelet[2168]: I0317 18:40:13.305049 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dstr6" podStartSLOduration=23.305033729 podStartE2EDuration="23.305033729s" podCreationTimestamp="2025-03-17 18:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:40:13.304558625 +0000 UTC m=+37.157364972" watchObservedRunningTime="2025-03-17 18:40:13.305033729 +0000 UTC m=+37.157840076"
Mar 17 18:40:13.322957 kubelet[2168]: I0317 18:40:13.322895 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-25ppq" podStartSLOduration=23.322843029 podStartE2EDuration="23.322843029s" podCreationTimestamp="2025-03-17 18:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:40:13.322745686 +0000 UTC m=+37.175552033" watchObservedRunningTime="2025-03-17 18:40:13.322843029 +0000 UTC m=+37.175649376"
Mar 17 18:40:14.299301 kubelet[2168]: E0317 18:40:14.299271 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:14.299667 kubelet[2168]: E0317 18:40:14.299371 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:15.300644 kubelet[2168]: E0317 18:40:15.300600 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:15.301092 kubelet[2168]: E0317 18:40:15.300760 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:15.604412 systemd[1]: Started sshd@7-10.0.0.57:22-10.0.0.1:44540.service.
Mar 17 18:40:15.636404 sshd[3559]: Accepted publickey for core from 10.0.0.1 port 44540 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:15.637586 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:15.641120 systemd-logind[1289]: New session 8 of user core.
Mar 17 18:40:15.641822 systemd[1]: Started session-8.scope.
Mar 17 18:40:15.752588 sshd[3559]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:15.754972 systemd[1]: sshd@7-10.0.0.57:22-10.0.0.1:44540.service: Deactivated successfully.
Mar 17 18:40:15.756084 systemd-logind[1289]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:40:15.756085 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:40:15.756814 systemd-logind[1289]: Removed session 8.
Mar 17 18:40:20.755383 systemd[1]: Started sshd@8-10.0.0.57:22-10.0.0.1:44556.service.
Mar 17 18:40:20.784468 sshd[3574]: Accepted publickey for core from 10.0.0.1 port 44556 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:20.785265 sshd[3574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:20.788556 systemd-logind[1289]: New session 9 of user core.
Mar 17 18:40:20.789327 systemd[1]: Started session-9.scope.
Mar 17 18:40:20.892325 sshd[3574]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:20.894762 systemd[1]: Started sshd@9-10.0.0.57:22-10.0.0.1:44568.service.
Mar 17 18:40:20.895176 systemd[1]: sshd@8-10.0.0.57:22-10.0.0.1:44556.service: Deactivated successfully.
Mar 17 18:40:20.895998 systemd-logind[1289]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:40:20.896116 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:40:20.897321 systemd-logind[1289]: Removed session 9.
Mar 17 18:40:20.926106 sshd[3588]: Accepted publickey for core from 10.0.0.1 port 44568 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:20.927244 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:20.930385 systemd-logind[1289]: New session 10 of user core.
Mar 17 18:40:20.931081 systemd[1]: Started session-10.scope.
Mar 17 18:40:21.082693 sshd[3588]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:21.086447 systemd[1]: Started sshd@10-10.0.0.57:22-10.0.0.1:44578.service.
Mar 17 18:40:21.087637 systemd[1]: sshd@9-10.0.0.57:22-10.0.0.1:44568.service: Deactivated successfully.
Mar 17 18:40:21.088680 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:40:21.089400 systemd-logind[1289]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:40:21.090539 systemd-logind[1289]: Removed session 10.
Mar 17 18:40:21.126519 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 44578 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:21.127678 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:21.130995 systemd-logind[1289]: New session 11 of user core.
Mar 17 18:40:21.131686 systemd[1]: Started session-11.scope.
Mar 17 18:40:21.236222 sshd[3600]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:21.238467 systemd[1]: sshd@10-10.0.0.57:22-10.0.0.1:44578.service: Deactivated successfully.
Mar 17 18:40:21.239330 systemd-logind[1289]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:40:21.239355 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:40:21.240214 systemd-logind[1289]: Removed session 11.
Mar 17 18:40:26.238132 systemd[1]: Started sshd@11-10.0.0.57:22-10.0.0.1:60636.service.
Mar 17 18:40:26.270572 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 60636 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:26.271598 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:26.274775 systemd-logind[1289]: New session 12 of user core.
Mar 17 18:40:26.275618 systemd[1]: Started session-12.scope.
Mar 17 18:40:26.403614 sshd[3618]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:26.405952 systemd[1]: sshd@11-10.0.0.57:22-10.0.0.1:60636.service: Deactivated successfully.
Mar 17 18:40:26.406771 systemd-logind[1289]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:40:26.406805 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:40:26.407422 systemd-logind[1289]: Removed session 12.
Mar 17 18:40:31.406206 systemd[1]: Started sshd@12-10.0.0.57:22-10.0.0.1:60652.service.
Mar 17 18:40:31.435610 sshd[3633]: Accepted publickey for core from 10.0.0.1 port 60652 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:31.436888 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:31.440317 systemd-logind[1289]: New session 13 of user core.
Mar 17 18:40:31.441294 systemd[1]: Started session-13.scope.
Mar 17 18:40:31.544011 sshd[3633]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:31.546773 systemd[1]: Started sshd@13-10.0.0.57:22-10.0.0.1:60654.service.
Mar 17 18:40:31.547384 systemd[1]: sshd@12-10.0.0.57:22-10.0.0.1:60652.service: Deactivated successfully.
Mar 17 18:40:31.548161 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:40:31.550311 systemd-logind[1289]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:40:31.551218 systemd-logind[1289]: Removed session 13.
Mar 17 18:40:31.577328 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 60654 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:31.578302 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:31.581330 systemd-logind[1289]: New session 14 of user core.
Mar 17 18:40:31.582292 systemd[1]: Started session-14.scope.
Mar 17 18:40:31.780731 sshd[3645]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:31.783017 systemd[1]: Started sshd@14-10.0.0.57:22-10.0.0.1:60660.service.
Mar 17 18:40:31.783442 systemd[1]: sshd@13-10.0.0.57:22-10.0.0.1:60654.service: Deactivated successfully.
Mar 17 18:40:31.784394 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:40:31.784801 systemd-logind[1289]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:40:31.785677 systemd-logind[1289]: Removed session 14.
Mar 17 18:40:31.815828 sshd[3657]: Accepted publickey for core from 10.0.0.1 port 60660 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:31.816769 sshd[3657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:31.819738 systemd-logind[1289]: New session 15 of user core.
Mar 17 18:40:31.820427 systemd[1]: Started session-15.scope.
Mar 17 18:40:33.227724 sshd[3657]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:33.230180 systemd[1]: Started sshd@15-10.0.0.57:22-10.0.0.1:60672.service.
Mar 17 18:40:33.231783 systemd[1]: sshd@14-10.0.0.57:22-10.0.0.1:60660.service: Deactivated successfully.
Mar 17 18:40:33.232815 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:40:33.232903 systemd-logind[1289]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:40:33.236834 systemd-logind[1289]: Removed session 15.
Mar 17 18:40:33.269030 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 60672 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:33.270150 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:33.273417 systemd-logind[1289]: New session 16 of user core.
Mar 17 18:40:33.274104 systemd[1]: Started session-16.scope.
Mar 17 18:40:33.494537 sshd[3676]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:33.497253 systemd[1]: Started sshd@16-10.0.0.57:22-10.0.0.1:60686.service.
Mar 17 18:40:33.497643 systemd[1]: sshd@15-10.0.0.57:22-10.0.0.1:60672.service: Deactivated successfully.
Mar 17 18:40:33.498389 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:40:33.499537 systemd-logind[1289]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:40:33.500356 systemd-logind[1289]: Removed session 16.
Mar 17 18:40:33.527336 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 60686 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:33.528347 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:33.531433 systemd-logind[1289]: New session 17 of user core.
Mar 17 18:40:33.532207 systemd[1]: Started session-17.scope.
Mar 17 18:40:33.627932 sshd[3691]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:33.630082 systemd[1]: sshd@16-10.0.0.57:22-10.0.0.1:60686.service: Deactivated successfully.
Mar 17 18:40:33.630960 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:40:33.630996 systemd-logind[1289]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:40:33.631740 systemd-logind[1289]: Removed session 17.
Mar 17 18:40:38.630650 systemd[1]: Started sshd@17-10.0.0.57:22-10.0.0.1:38060.service.
Mar 17 18:40:38.660800 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 38060 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:38.662072 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:38.666115 systemd-logind[1289]: New session 18 of user core.
Mar 17 18:40:38.666957 systemd[1]: Started session-18.scope.
Mar 17 18:40:38.769862 sshd[3709]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:38.771912 systemd[1]: sshd@17-10.0.0.57:22-10.0.0.1:38060.service: Deactivated successfully.
Mar 17 18:40:38.772692 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:40:38.773621 systemd-logind[1289]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:40:38.774375 systemd-logind[1289]: Removed session 18.
Mar 17 18:40:43.772658 systemd[1]: Started sshd@18-10.0.0.57:22-10.0.0.1:38076.service.
Mar 17 18:40:43.803013 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 38076 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:43.804101 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:43.807274 systemd-logind[1289]: New session 19 of user core.
Mar 17 18:40:43.807981 systemd[1]: Started session-19.scope.
Mar 17 18:40:43.903606 sshd[3726]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:43.905665 systemd[1]: sshd@18-10.0.0.57:22-10.0.0.1:38076.service: Deactivated successfully.
Mar 17 18:40:43.906498 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:40:43.907508 systemd-logind[1289]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:40:43.908255 systemd-logind[1289]: Removed session 19.
Mar 17 18:40:48.907222 systemd[1]: Started sshd@19-10.0.0.57:22-10.0.0.1:41628.service.
Mar 17 18:40:48.937491 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 41628 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:40:48.938686 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:48.942752 systemd-logind[1289]: New session 20 of user core. Mar 17 18:40:48.943729 systemd[1]: Started session-20.scope. Mar 17 18:40:49.046617 sshd[3740]: pam_unix(sshd:session): session closed for user core Mar 17 18:40:49.049140 systemd[1]: sshd@19-10.0.0.57:22-10.0.0.1:41628.service: Deactivated successfully. Mar 17 18:40:49.049847 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:40:49.050582 systemd-logind[1289]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:40:49.051444 systemd-logind[1289]: Removed session 20. Mar 17 18:40:53.225783 kubelet[2168]: E0317 18:40:53.225739 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:40:54.049949 systemd[1]: Started sshd@20-10.0.0.57:22-10.0.0.1:34078.service. Mar 17 18:40:54.079251 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 34078 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:40:54.080239 sshd[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:40:54.083225 systemd-logind[1289]: New session 21 of user core. Mar 17 18:40:54.084225 systemd[1]: Started session-21.scope. Mar 17 18:40:54.186815 sshd[3756]: pam_unix(sshd:session): session closed for user core Mar 17 18:40:54.189978 systemd[1]: Started sshd@21-10.0.0.57:22-10.0.0.1:34086.service. Mar 17 18:40:54.190492 systemd[1]: sshd@20-10.0.0.57:22-10.0.0.1:34078.service: Deactivated successfully. Mar 17 18:40:54.191552 systemd-logind[1289]: Session 21 logged out. Waiting for processes to exit. 
Mar 17 18:40:54.191617 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:40:54.192769 systemd-logind[1289]: Removed session 21.
Mar 17 18:40:54.219283 sshd[3769]: Accepted publickey for core from 10.0.0.1 port 34086 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:54.220550 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:54.223921 systemd-logind[1289]: New session 22 of user core.
Mar 17 18:40:54.224642 systemd[1]: Started session-22.scope.
Mar 17 18:40:54.225225 kubelet[2168]: E0317 18:40:54.225199    2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:40:55.876313 env[1303]: time="2025-03-17T18:40:55.876139712Z" level=info msg="StopContainer for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" with timeout 30 (s)"
Mar 17 18:40:55.879644 env[1303]: time="2025-03-17T18:40:55.879556517Z" level=info msg="Stop container \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" with signal terminated"
Mar 17 18:40:55.902599 env[1303]: time="2025-03-17T18:40:55.902529873Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:40:55.907427 env[1303]: time="2025-03-17T18:40:55.907389419Z" level=info msg="StopContainer for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" with timeout 2 (s)"
Mar 17 18:40:55.907633 env[1303]: time="2025-03-17T18:40:55.907609854Z" level=info msg="Stop container \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" with signal terminated"
Mar 17 18:40:55.908758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25-rootfs.mount: Deactivated successfully.
Mar 17 18:40:55.914120 systemd-networkd[1079]: lxc_health: Link DOWN
Mar 17 18:40:55.914128 systemd-networkd[1079]: lxc_health: Lost carrier
Mar 17 18:40:55.921852 env[1303]: time="2025-03-17T18:40:55.921803328Z" level=info msg="shim disconnected" id=41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25
Mar 17 18:40:55.921952 env[1303]: time="2025-03-17T18:40:55.921856300Z" level=warning msg="cleaning up after shim disconnected" id=41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25 namespace=k8s.io
Mar 17 18:40:55.921952 env[1303]: time="2025-03-17T18:40:55.921883052Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:55.931294 env[1303]: time="2025-03-17T18:40:55.931196752Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3824 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:55.935336 env[1303]: time="2025-03-17T18:40:55.935296553Z" level=info msg="StopContainer for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" returns successfully"
Mar 17 18:40:55.939214 env[1303]: time="2025-03-17T18:40:55.939177002Z" level=info msg="StopPodSandbox for \"d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78\""
Mar 17 18:40:55.939307 env[1303]: time="2025-03-17T18:40:55.939241376Z" level=info msg="Container to stop \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:55.942036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78-shm.mount: Deactivated successfully.
Mar 17 18:40:55.963432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371-rootfs.mount: Deactivated successfully.
Mar 17 18:40:55.968395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78-rootfs.mount: Deactivated successfully.
Mar 17 18:40:55.972710 env[1303]: time="2025-03-17T18:40:55.972661335Z" level=info msg="shim disconnected" id=35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371
Mar 17 18:40:55.972790 env[1303]: time="2025-03-17T18:40:55.972715420Z" level=warning msg="cleaning up after shim disconnected" id=35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371 namespace=k8s.io
Mar 17 18:40:55.972790 env[1303]: time="2025-03-17T18:40:55.972731841Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:55.973098 env[1303]: time="2025-03-17T18:40:55.973048742Z" level=info msg="shim disconnected" id=d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78
Mar 17 18:40:55.973098 env[1303]: time="2025-03-17T18:40:55.973097125Z" level=warning msg="cleaning up after shim disconnected" id=d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78 namespace=k8s.io
Mar 17 18:40:55.973098 env[1303]: time="2025-03-17T18:40:55.973106082Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:55.979007 env[1303]: time="2025-03-17T18:40:55.978955475Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3873 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:55.979279 env[1303]: time="2025-03-17T18:40:55.979244943Z" level=info msg="TearDown network for sandbox \"d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78\" successfully"
Mar 17 18:40:55.979279 env[1303]: time="2025-03-17T18:40:55.979275923Z" level=info msg="StopPodSandbox for \"d329f6becee75d8f742653bf893b024f1ea7e58005821977d6a6ec85c735dd78\" returns successfully"
Mar 17 18:40:55.980601 env[1303]: time="2025-03-17T18:40:55.980523878Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3872 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:55.984470 env[1303]: time="2025-03-17T18:40:55.984426490Z" level=info msg="StopContainer for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" returns successfully"
Mar 17 18:40:55.984734 env[1303]: time="2025-03-17T18:40:55.984709435Z" level=info msg="StopPodSandbox for \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\""
Mar 17 18:40:55.984952 env[1303]: time="2025-03-17T18:40:55.984830358Z" level=info msg="Container to stop \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:55.984952 env[1303]: time="2025-03-17T18:40:55.984859003Z" level=info msg="Container to stop \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:55.984952 env[1303]: time="2025-03-17T18:40:55.984893119Z" level=info msg="Container to stop \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:55.984952 env[1303]: time="2025-03-17T18:40:55.984904972Z" level=info msg="Container to stop \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:55.984952 env[1303]: time="2025-03-17T18:40:55.984915021Z" level=info msg="Container to stop \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:56.009294 env[1303]: time="2025-03-17T18:40:56.009225783Z" level=info msg="shim disconnected" id=ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a
Mar 17 18:40:56.009516 env[1303]: time="2025-03-17T18:40:56.009496775Z" level=warning msg="cleaning up after shim disconnected" id=ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a namespace=k8s.io
Mar 17 18:40:56.009610 env[1303]: time="2025-03-17T18:40:56.009592760Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:56.015925 env[1303]: time="2025-03-17T18:40:56.015898173Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3915 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:56.016172 env[1303]: time="2025-03-17T18:40:56.016152574Z" level=info msg="TearDown network for sandbox \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" successfully"
Mar 17 18:40:56.016221 env[1303]: time="2025-03-17T18:40:56.016172151Z" level=info msg="StopPodSandbox for \"ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a\" returns successfully"
Mar 17 18:40:56.098960 kubelet[2168]: I0317 18:40:56.098915    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-net\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.098960 kubelet[2168]: I0317 18:40:56.098952    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-etc-cni-netd\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.098960 kubelet[2168]: I0317 18:40:56.098967    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-hostproc\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099478 kubelet[2168]: I0317 18:40:56.098986    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5jbt\" (UniqueName: \"kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-kube-api-access-g5jbt\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099478 kubelet[2168]: I0317 18:40:56.098983    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.099478 kubelet[2168]: I0317 18:40:56.099003    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-xtables-lock\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099478 kubelet[2168]: I0317 18:40:56.099029    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.099478 kubelet[2168]: I0317 18:40:56.099051    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.099648 kubelet[2168]: I0317 18:40:56.099064    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.099648 kubelet[2168]: I0317 18:40:56.099065    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-lib-modules\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099648 kubelet[2168]: I0317 18:40:56.099090    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/924c885d-5e49-40d8-890e-e45fb16ba92a-cilium-config-path\") pod \"924c885d-5e49-40d8-890e-e45fb16ba92a\" (UID: \"924c885d-5e49-40d8-890e-e45fb16ba92a\") "
Mar 17 18:40:56.099648 kubelet[2168]: I0317 18:40:56.099106    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cni-path\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099648 kubelet[2168]: I0317 18:40:56.099122    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-hubble-tls\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099648 kubelet[2168]: I0317 18:40:56.099137    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d683e4fe-4fcd-4270-933b-482485f025c0-clustermesh-secrets\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099840 kubelet[2168]: I0317 18:40:56.099152    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-config-path\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099840 kubelet[2168]: I0317 18:40:56.099166    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-kernel\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099840 kubelet[2168]: I0317 18:40:56.099181    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtzg\" (UniqueName: \"kubernetes.io/projected/924c885d-5e49-40d8-890e-e45fb16ba92a-kube-api-access-cxtzg\") pod \"924c885d-5e49-40d8-890e-e45fb16ba92a\" (UID: \"924c885d-5e49-40d8-890e-e45fb16ba92a\") "
Mar 17 18:40:56.099840 kubelet[2168]: I0317 18:40:56.099194    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-cgroup\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099840 kubelet[2168]: I0317 18:40:56.099205    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-bpf-maps\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.099840 kubelet[2168]: I0317 18:40:56.099216    2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-run\") pod \"d683e4fe-4fcd-4270-933b-482485f025c0\" (UID: \"d683e4fe-4fcd-4270-933b-482485f025c0\") "
Mar 17 18:40:56.100055 kubelet[2168]: I0317 18:40:56.099257    2168 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.100055 kubelet[2168]: I0317 18:40:56.099265    2168 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.100055 kubelet[2168]: I0317 18:40:56.099284    2168 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.100055 kubelet[2168]: I0317 18:40:56.099291    2168 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.100055 kubelet[2168]: I0317 18:40:56.099310    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.100230 kubelet[2168]: I0317 18:40:56.100108    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.102532 kubelet[2168]: I0317 18:40:56.100332    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.102532 kubelet[2168]: I0317 18:40:56.100841    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.102532 kubelet[2168]: I0317 18:40:56.101092    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.102532 kubelet[2168]: I0317 18:40:56.101116    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:56.102532 kubelet[2168]: I0317 18:40:56.102114    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:40:56.102734 kubelet[2168]: I0317 18:40:56.102315    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924c885d-5e49-40d8-890e-e45fb16ba92a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "924c885d-5e49-40d8-890e-e45fb16ba92a" (UID: "924c885d-5e49-40d8-890e-e45fb16ba92a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:40:56.103206 kubelet[2168]: I0317 18:40:56.103173    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683e4fe-4fcd-4270-933b-482485f025c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:40:56.103281 kubelet[2168]: I0317 18:40:56.103175    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924c885d-5e49-40d8-890e-e45fb16ba92a-kube-api-access-cxtzg" (OuterVolumeSpecName: "kube-api-access-cxtzg") pod "924c885d-5e49-40d8-890e-e45fb16ba92a" (UID: "924c885d-5e49-40d8-890e-e45fb16ba92a"). InnerVolumeSpecName "kube-api-access-cxtzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:56.104000 kubelet[2168]: I0317 18:40:56.103972    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:56.104056 kubelet[2168]: I0317 18:40:56.104019    2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-kube-api-access-g5jbt" (OuterVolumeSpecName: "kube-api-access-g5jbt") pod "d683e4fe-4fcd-4270-933b-482485f025c0" (UID: "d683e4fe-4fcd-4270-933b-482485f025c0"). InnerVolumeSpecName "kube-api-access-g5jbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:56.200099 kubelet[2168]: I0317 18:40:56.200067    2168 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200099 kubelet[2168]: I0317 18:40:56.200100    2168 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200110    2168 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200121    2168 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-g5jbt\" (UniqueName: \"kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-kube-api-access-g5jbt\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200133    2168 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200144    2168 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/924c885d-5e49-40d8-890e-e45fb16ba92a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200154    2168 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200163    2168 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d683e4fe-4fcd-4270-933b-482485f025c0-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200172    2168 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d683e4fe-4fcd-4270-933b-482485f025c0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200309 kubelet[2168]: I0317 18:40:56.200182    2168 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d683e4fe-4fcd-4270-933b-482485f025c0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200524 kubelet[2168]: I0317 18:40:56.200192    2168 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d683e4fe-4fcd-4270-933b-482485f025c0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.200524 kubelet[2168]: I0317 18:40:56.200201    2168 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cxtzg\" (UniqueName: \"kubernetes.io/projected/924c885d-5e49-40d8-890e-e45fb16ba92a-kube-api-access-cxtzg\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:56.290236 kubelet[2168]: E0317 18:40:56.290204    2168 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:40:56.366926 kubelet[2168]: I0317 18:40:56.366891    2168 scope.go:117] "RemoveContainer" containerID="41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25"
Mar 17 18:40:56.368113 env[1303]: time="2025-03-17T18:40:56.368078034Z" level=info msg="RemoveContainer for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\""
Mar 17 18:40:56.371775 env[1303]: time="2025-03-17T18:40:56.371741099Z" level=info msg="RemoveContainer for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" returns successfully"
Mar 17 18:40:56.372008 kubelet[2168]: I0317 18:40:56.371984    2168 scope.go:117] "RemoveContainer" containerID="41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25"
Mar 17 18:40:56.372263 env[1303]: time="2025-03-17T18:40:56.372162380Z" level=error msg="ContainerStatus for \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\": not found"
Mar 17 18:40:56.372441 kubelet[2168]: E0317 18:40:56.372371    2168 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\": not found" containerID="41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25"
Mar 17 18:40:56.372560 kubelet[2168]: I0317 18:40:56.372414    2168 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25"} err="failed to get container status \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\": rpc error: code = NotFound desc = an error occurred when try to find container \"41638ae4d7834d78890be8db29e726c764f3a48a85da8fa082a77cf5975baf25\": not found"
Mar 17 18:40:56.372560 kubelet[2168]: I0317 18:40:56.372495    2168 scope.go:117] "RemoveContainer" containerID="35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371"
Mar 17 18:40:56.373419 env[1303]: time="2025-03-17T18:40:56.373396888Z" level=info msg="RemoveContainer for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\""
Mar 17 18:40:56.376788 env[1303]: time="2025-03-17T18:40:56.376753803Z" level=info msg="RemoveContainer for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" returns successfully"
Mar 17 18:40:56.376919 kubelet[2168]: I0317 18:40:56.376900    2168 scope.go:117] "RemoveContainer" containerID="733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a"
Mar 17 18:40:56.378069 env[1303]: time="2025-03-17T18:40:56.378042004Z" level=info msg="RemoveContainer for \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\""
Mar 17 18:40:56.381399 env[1303]: time="2025-03-17T18:40:56.381364513Z" level=info msg="RemoveContainer for \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\" returns successfully"
Mar 17 18:40:56.381568 kubelet[2168]: I0317 18:40:56.381546    2168 scope.go:117] "RemoveContainer" containerID="ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25"
Mar 17 18:40:56.382617 env[1303]: time="2025-03-17T18:40:56.382588540Z" level=info msg="RemoveContainer for \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\""
Mar 17 18:40:56.386449 env[1303]: time="2025-03-17T18:40:56.386412916Z" level=info msg="RemoveContainer for \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\" returns successfully"
Mar 17 18:40:56.386609 kubelet[2168]: I0317 18:40:56.386586    2168 scope.go:117] "RemoveContainer" containerID="4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d"
Mar 17 18:40:56.389705 env[1303]: time="2025-03-17T18:40:56.389454263Z" level=info msg="RemoveContainer for \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\""
Mar 17 18:40:56.392421 env[1303]: time="2025-03-17T18:40:56.392388493Z" level=info msg="RemoveContainer for \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\" returns successfully"
Mar 17 18:40:56.392549 kubelet[2168]: I0317 18:40:56.392528    2168 scope.go:117] "RemoveContainer" containerID="23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43"
Mar 17 18:40:56.393368 env[1303]: time="2025-03-17T18:40:56.393345357Z" level=info msg="RemoveContainer for \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\""
Mar 17 18:40:56.395907 env[1303]: time="2025-03-17T18:40:56.395857805Z" level=info msg="RemoveContainer for \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\" returns successfully"
Mar 17 18:40:56.396000 kubelet[2168]: I0317 18:40:56.395982    2168 scope.go:117] "RemoveContainer" containerID="35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371"
Mar 17 18:40:56.396215 env[1303]: time="2025-03-17T18:40:56.396170518Z" level=error msg="ContainerStatus for \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\": not found"
Mar 17 18:40:56.396313 kubelet[2168]: E0317 18:40:56.396296    2168 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\": not found" containerID="35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371"
Mar 17 18:40:56.396357 kubelet[2168]: I0317 18:40:56.396318    2168 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371"} err="failed to get container status \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\": rpc error: code = NotFound desc = an error occurred when try to find container \"35c3fb32397012954e5ba7bc565432f70d745f57293b2e1ac909d30f4b528371\": not found"
Mar 17 18:40:56.396357 kubelet[2168]: I0317 18:40:56.396337    2168 scope.go:117] "RemoveContainer" containerID="733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a"
Mar 17 18:40:56.396494 env[1303]: time="2025-03-17T18:40:56.396452120Z" level=error msg="ContainerStatus for \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\": not found"
Mar 17 18:40:56.396582 kubelet[2168]: E0317 18:40:56.396555    2168 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\": not found" containerID="733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a"
Mar 17 18:40:56.396645 kubelet[2168]: I0317 18:40:56.396587    2168 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a"} err="failed to get container status \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"733b1a6306f7c4d976da26daa144058f23a5ad58d771ea70a37d294f3e9f6a1a\": not found"
Mar 17 18:40:56.396645 kubelet[2168]: I0317 18:40:56.396608    2168 scope.go:117] "RemoveContainer" containerID="ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25"
Mar 17 18:40:56.396760 env[1303]: time="2025-03-17T18:40:56.396728071Z" level=error msg="ContainerStatus for \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\": not found"
Mar 17 18:40:56.396846 kubelet[2168]: E0317 18:40:56.396828    2168 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\": not found"
containerID="ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25" Mar 17 18:40:56.396917 kubelet[2168]: I0317 18:40:56.396845 2168 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25"} err="failed to get container status \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed27693a345fa75bb086745b15b0456812cf9a15fef0a1b1b3c1ad8534a7ac25\": not found" Mar 17 18:40:56.396917 kubelet[2168]: I0317 18:40:56.396882 2168 scope.go:117] "RemoveContainer" containerID="4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d" Mar 17 18:40:56.397080 env[1303]: time="2025-03-17T18:40:56.397033138Z" level=error msg="ContainerStatus for \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\": not found" Mar 17 18:40:56.397176 kubelet[2168]: E0317 18:40:56.397160 2168 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\": not found" containerID="4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d" Mar 17 18:40:56.397226 kubelet[2168]: I0317 18:40:56.397176 2168 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d"} err="failed to get container status \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ab3fdc2a52abe381b0dbed697218f9c8529860ae65d8cbfdccd729bb4dd2b0d\": not found" Mar 17 
18:40:56.397226 kubelet[2168]: I0317 18:40:56.397188 2168 scope.go:117] "RemoveContainer" containerID="23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43" Mar 17 18:40:56.397365 env[1303]: time="2025-03-17T18:40:56.397325011Z" level=error msg="ContainerStatus for \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\": not found" Mar 17 18:40:56.397430 kubelet[2168]: E0317 18:40:56.397417 2168 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\": not found" containerID="23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43" Mar 17 18:40:56.397456 kubelet[2168]: I0317 18:40:56.397432 2168 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43"} err="failed to get container status \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\": rpc error: code = NotFound desc = an error occurred when try to find container \"23833d725ee865bc81e4707cfe66fb14646610e4d1b38e891d8f7877fed13a43\": not found" Mar 17 18:40:56.888029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a-rootfs.mount: Deactivated successfully. Mar 17 18:40:56.888175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae8aa50511bed516d1278c42f49c89861e873b843b86a8a7a776e0d01c5a804a-shm.mount: Deactivated successfully. Mar 17 18:40:56.888261 systemd[1]: var-lib-kubelet-pods-d683e4fe\x2d4fcd\x2d4270\x2d933b\x2d482485f025c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5jbt.mount: Deactivated successfully. 
Mar 17 18:40:56.888356 systemd[1]: var-lib-kubelet-pods-d683e4fe\x2d4fcd\x2d4270\x2d933b\x2d482485f025c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:40:56.888452 systemd[1]: var-lib-kubelet-pods-d683e4fe\x2d4fcd\x2d4270\x2d933b\x2d482485f025c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:40:56.888539 systemd[1]: var-lib-kubelet-pods-924c885d\x2d5e49\x2d40d8\x2d890e\x2de45fb16ba92a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxtzg.mount: Deactivated successfully.
Mar 17 18:40:57.516584 sshd[3769]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:57.519753 systemd[1]: Started sshd@22-10.0.0.57:22-10.0.0.1:34100.service.
Mar 17 18:40:57.520953 systemd[1]: sshd@21-10.0.0.57:22-10.0.0.1:34086.service: Deactivated successfully.
Mar 17 18:40:57.521929 systemd-logind[1289]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:40:57.522018 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:40:57.523038 systemd-logind[1289]: Removed session 22.
Mar 17 18:40:57.555626 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 34100 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:57.556779 sshd[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:57.560212 systemd-logind[1289]: New session 23 of user core.
Mar 17 18:40:57.560943 systemd[1]: Started session-23.scope.
Mar 17 18:40:57.974605 sshd[3934]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:57.977172 systemd[1]: Started sshd@23-10.0.0.57:22-10.0.0.1:34106.service.
Mar 17 18:40:57.981554 systemd[1]: sshd@22-10.0.0.57:22-10.0.0.1:34100.service: Deactivated successfully.
Mar 17 18:40:57.982370 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:40:57.983730 systemd-logind[1289]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:40:57.984669 systemd-logind[1289]: Removed session 23.
Mar 17 18:40:58.003737 kubelet[2168]: I0317 18:40:58.003076 2168 topology_manager.go:215] "Topology Admit Handler" podUID="f0e81ca2-cc03-4b00-a620-085e30de48d3" podNamespace="kube-system" podName="cilium-lwhp6"
Mar 17 18:40:58.003737 kubelet[2168]: E0317 18:40:58.003135 2168 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" containerName="clean-cilium-state"
Mar 17 18:40:58.003737 kubelet[2168]: E0317 18:40:58.003143 2168 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="924c885d-5e49-40d8-890e-e45fb16ba92a" containerName="cilium-operator"
Mar 17 18:40:58.003737 kubelet[2168]: E0317 18:40:58.003149 2168 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" containerName="mount-cgroup"
Mar 17 18:40:58.003737 kubelet[2168]: E0317 18:40:58.003154 2168 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" containerName="apply-sysctl-overwrites"
Mar 17 18:40:58.003737 kubelet[2168]: E0317 18:40:58.003159 2168 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" containerName="mount-bpf-fs"
Mar 17 18:40:58.003737 kubelet[2168]: E0317 18:40:58.003164 2168 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" containerName="cilium-agent"
Mar 17 18:40:58.003737 kubelet[2168]: I0317 18:40:58.003181 2168 memory_manager.go:354] "RemoveStaleState removing state" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" containerName="cilium-agent"
Mar 17 18:40:58.003737 kubelet[2168]: I0317 18:40:58.003187 2168 memory_manager.go:354] "RemoveStaleState removing state" podUID="924c885d-5e49-40d8-890e-e45fb16ba92a" containerName="cilium-operator"
Mar 17 18:40:58.018303 sshd[3946]: Accepted publickey for core from 10.0.0.1 port 34106 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:58.019884 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:58.025627 systemd[1]: Started session-24.scope.
Mar 17 18:40:58.025967 systemd-logind[1289]: New session 24 of user core.
Mar 17 18:40:58.111448 kubelet[2168]: I0317 18:40:58.111393 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-lib-modules\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111448 kubelet[2168]: I0317 18:40:58.111433 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-config-path\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111448 kubelet[2168]: I0317 18:40:58.111453 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-net\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111750 kubelet[2168]: I0317 18:40:58.111468 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-kernel\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111750 kubelet[2168]: I0317 18:40:58.111485 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-hostproc\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111750 kubelet[2168]: I0317 18:40:58.111498 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cni-path\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111750 kubelet[2168]: I0317 18:40:58.111514 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-hubble-tls\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111750 kubelet[2168]: I0317 18:40:58.111529 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pw56\" (UniqueName: \"kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-kube-api-access-6pw56\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.111750 kubelet[2168]: I0317 18:40:58.111544 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-clustermesh-secrets\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.112021 kubelet[2168]: I0317 18:40:58.111562 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-run\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.112021 kubelet[2168]: I0317 18:40:58.111575 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-bpf-maps\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.112021 kubelet[2168]: I0317 18:40:58.111587 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-xtables-lock\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.112021 kubelet[2168]: I0317 18:40:58.111601 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-cgroup\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.112021 kubelet[2168]: I0317 18:40:58.111615 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-etc-cni-netd\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.112021 kubelet[2168]: I0317 18:40:58.111627 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-ipsec-secrets\") pod \"cilium-lwhp6\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") " pod="kube-system/cilium-lwhp6"
Mar 17 18:40:58.149793 sshd[3946]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:58.152212 systemd[1]: Started sshd@24-10.0.0.57:22-10.0.0.1:34120.service.
Mar 17 18:40:58.154807 kubelet[2168]: E0317 18:40:58.154742 2168 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6pw56 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-lwhp6" podUID="f0e81ca2-cc03-4b00-a620-085e30de48d3"
Mar 17 18:40:58.160710 systemd[1]: sshd@23-10.0.0.57:22-10.0.0.1:34106.service: Deactivated successfully.
Mar 17 18:40:58.162244 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:40:58.164721 systemd-logind[1289]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:40:58.165858 systemd-logind[1289]: Removed session 24.
Mar 17 18:40:58.183041 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 34120 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:40:58.184165 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:58.187690 systemd-logind[1289]: New session 25 of user core.
Mar 17 18:40:58.188462 systemd[1]: Started session-25.scope.
Mar 17 18:40:58.226779 kubelet[2168]: I0317 18:40:58.226669 2168 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924c885d-5e49-40d8-890e-e45fb16ba92a" path="/var/lib/kubelet/pods/924c885d-5e49-40d8-890e-e45fb16ba92a/volumes"
Mar 17 18:40:58.227032 kubelet[2168]: I0317 18:40:58.227015 2168 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d683e4fe-4fcd-4270-933b-482485f025c0" path="/var/lib/kubelet/pods/d683e4fe-4fcd-4270-933b-482485f025c0/volumes"
Mar 17 18:40:58.341458 kubelet[2168]: I0317 18:40:58.341408 2168 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:40:58Z","lastTransitionTime":"2025-03-17T18:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:40:58.413417 kubelet[2168]: I0317 18:40:58.413376 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-kernel\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413417 kubelet[2168]: I0317 18:40:58.413421 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-cgroup\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413616 kubelet[2168]: I0317 18:40:58.413439 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-net\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413616 kubelet[2168]: I0317 18:40:58.413461 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pw56\" (UniqueName: \"kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-kube-api-access-6pw56\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413616 kubelet[2168]: I0317 18:40:58.413530 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-clustermesh-secrets\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413616 kubelet[2168]: I0317 18:40:58.413570 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-hostproc\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413616 kubelet[2168]: I0317 18:40:58.413584 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-bpf-maps\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413616 kubelet[2168]: I0317 18:40:58.413574 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.413750 kubelet[2168]: I0317 18:40:58.413602 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-config-path\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413750 kubelet[2168]: I0317 18:40:58.413675 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-run\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413750 kubelet[2168]: I0317 18:40:58.413695 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cni-path\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413750 kubelet[2168]: I0317 18:40:58.413717 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-hubble-tls\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413750 kubelet[2168]: I0317 18:40:58.413732 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-lib-modules\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413750 kubelet[2168]: I0317 18:40:58.413745 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-etc-cni-netd\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413904 kubelet[2168]: I0317 18:40:58.413767 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-ipsec-secrets\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413904 kubelet[2168]: I0317 18:40:58.413787 2168 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-xtables-lock\") pod \"f0e81ca2-cc03-4b00-a620-085e30de48d3\" (UID: \"f0e81ca2-cc03-4b00-a620-085e30de48d3\") "
Mar 17 18:40:58.413904 kubelet[2168]: I0317 18:40:58.413840 2168 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.413904 kubelet[2168]: I0317 18:40:58.413892 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.413997 kubelet[2168]: I0317 18:40:58.413918 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.413997 kubelet[2168]: I0317 18:40:58.413936 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.413997 kubelet[2168]: I0317 18:40:58.413950 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cni-path" (OuterVolumeSpecName: "cni-path") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.414152 kubelet[2168]: I0317 18:40:58.414121 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.414251 kubelet[2168]: I0317 18:40:58.414234 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-hostproc" (OuterVolumeSpecName: "hostproc") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.414574 kubelet[2168]: I0317 18:40:58.414548 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.414625 kubelet[2168]: I0317 18:40:58.414584 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.414625 kubelet[2168]: I0317 18:40:58.414598 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:58.415303 kubelet[2168]: I0317 18:40:58.415276 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:40:58.417494 kubelet[2168]: I0317 18:40:58.416120 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:40:58.417494 kubelet[2168]: I0317 18:40:58.417009 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-kube-api-access-6pw56" (OuterVolumeSpecName: "kube-api-access-6pw56") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "kube-api-access-6pw56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:58.417681 systemd[1]: var-lib-kubelet-pods-f0e81ca2\x2dcc03\x2d4b00\x2da620\x2d085e30de48d3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:40:58.417813 kubelet[2168]: I0317 18:40:58.417685 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:58.418193 kubelet[2168]: I0317 18:40:58.418009 2168 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f0e81ca2-cc03-4b00-a620-085e30de48d3" (UID: "f0e81ca2-cc03-4b00-a620-085e30de48d3"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:40:58.419774 systemd[1]: var-lib-kubelet-pods-f0e81ca2\x2dcc03\x2d4b00\x2da620\x2d085e30de48d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6pw56.mount: Deactivated successfully.
Mar 17 18:40:58.419891 systemd[1]: var-lib-kubelet-pods-f0e81ca2\x2dcc03\x2d4b00\x2da620\x2d085e30de48d3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:40:58.420000 systemd[1]: var-lib-kubelet-pods-f0e81ca2\x2dcc03\x2d4b00\x2da620\x2d085e30de48d3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514381 2168 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514413 2168 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6pw56\" (UniqueName: \"kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-kube-api-access-6pw56\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514423 2168 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514432 2168 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514440 2168 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514447 2168 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514454 2168 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514505 kubelet[2168]: I0317 18:40:58.514460 2168 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514832 kubelet[2168]: I0317 18:40:58.514467 2168 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e81ca2-cc03-4b00-a620-085e30de48d3-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514832 kubelet[2168]: I0317 18:40:58.514473 2168 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514832 kubelet[2168]: I0317 18:40:58.514479 2168 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514832 kubelet[2168]: I0317 18:40:58.514486 2168 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e81ca2-cc03-4b00-a620-085e30de48d3-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:58.514832 kubelet[2168]: I0317 18:40:58.514493 2168 reconciler_common.go:289] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:40:58.514832 kubelet[2168]: I0317 18:40:58.514500 2168 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e81ca2-cc03-4b00-a620-085e30de48d3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:40:59.521832 kubelet[2168]: I0317 18:40:59.521791 2168 topology_manager.go:215] "Topology Admit Handler" podUID="8a9b89db-8643-4686-abe5-46597d02b0a6" podNamespace="kube-system" podName="cilium-d74ld" Mar 17 18:40:59.620385 kubelet[2168]: I0317 18:40:59.620312 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-bpf-maps\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620385 kubelet[2168]: I0317 18:40:59.620379 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-lib-modules\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620569 kubelet[2168]: I0317 18:40:59.620401 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a9b89db-8643-4686-abe5-46597d02b0a6-clustermesh-secrets\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620569 kubelet[2168]: I0317 18:40:59.620510 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-host-proc-sys-kernel\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620569 kubelet[2168]: I0317 18:40:59.620565 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkv7g\" (UniqueName: \"kubernetes.io/projected/8a9b89db-8643-4686-abe5-46597d02b0a6-kube-api-access-xkv7g\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620678 kubelet[2168]: I0317 18:40:59.620598 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-cilium-run\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620678 kubelet[2168]: I0317 18:40:59.620622 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-hostproc\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620678 kubelet[2168]: I0317 18:40:59.620643 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-cilium-cgroup\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620750 kubelet[2168]: I0317 18:40:59.620708 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-cni-path\") pod \"cilium-d74ld\" (UID: 
\"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620750 kubelet[2168]: I0317 18:40:59.620740 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-etc-cni-netd\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620809 kubelet[2168]: I0317 18:40:59.620760 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a9b89db-8643-4686-abe5-46597d02b0a6-cilium-ipsec-secrets\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620809 kubelet[2168]: I0317 18:40:59.620786 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-host-proc-sys-net\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620809 kubelet[2168]: I0317 18:40:59.620806 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a9b89db-8643-4686-abe5-46597d02b0a6-hubble-tls\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620933 kubelet[2168]: I0317 18:40:59.620823 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a9b89db-8643-4686-abe5-46597d02b0a6-xtables-lock\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.620933 kubelet[2168]: I0317 
18:40:59.620844 2168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a9b89db-8643-4686-abe5-46597d02b0a6-cilium-config-path\") pod \"cilium-d74ld\" (UID: \"8a9b89db-8643-4686-abe5-46597d02b0a6\") " pod="kube-system/cilium-d74ld" Mar 17 18:40:59.826794 kubelet[2168]: E0317 18:40:59.826668 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:40:59.827320 env[1303]: time="2025-03-17T18:40:59.827275127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d74ld,Uid:8a9b89db-8643-4686-abe5-46597d02b0a6,Namespace:kube-system,Attempt:0,}" Mar 17 18:40:59.840577 env[1303]: time="2025-03-17T18:40:59.840511488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:40:59.840577 env[1303]: time="2025-03-17T18:40:59.840549581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:40:59.840577 env[1303]: time="2025-03-17T18:40:59.840561293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:40:59.840817 env[1303]: time="2025-03-17T18:40:59.840737703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43 pid=3992 runtime=io.containerd.runc.v2 Mar 17 18:40:59.874215 env[1303]: time="2025-03-17T18:40:59.874142849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d74ld,Uid:8a9b89db-8643-4686-abe5-46597d02b0a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\"" Mar 17 18:40:59.874888 kubelet[2168]: E0317 18:40:59.874837 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:40:59.878150 env[1303]: time="2025-03-17T18:40:59.878105856Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:40:59.892029 env[1303]: time="2025-03-17T18:40:59.891957369Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"71c969d7d2983f48ada64f3746e44437224d60bbf828e932c9f9133717de7092\"" Mar 17 18:40:59.892661 env[1303]: time="2025-03-17T18:40:59.892628569Z" level=info msg="StartContainer for \"71c969d7d2983f48ada64f3746e44437224d60bbf828e932c9f9133717de7092\"" Mar 17 18:40:59.932762 env[1303]: time="2025-03-17T18:40:59.932684688Z" level=info msg="StartContainer for \"71c969d7d2983f48ada64f3746e44437224d60bbf828e932c9f9133717de7092\" returns successfully" Mar 17 18:40:59.966363 env[1303]: time="2025-03-17T18:40:59.966294116Z" level=info msg="shim disconnected" 
id=71c969d7d2983f48ada64f3746e44437224d60bbf828e932c9f9133717de7092 Mar 17 18:40:59.966363 env[1303]: time="2025-03-17T18:40:59.966351958Z" level=warning msg="cleaning up after shim disconnected" id=71c969d7d2983f48ada64f3746e44437224d60bbf828e932c9f9133717de7092 namespace=k8s.io Mar 17 18:40:59.966363 env[1303]: time="2025-03-17T18:40:59.966361526Z" level=info msg="cleaning up dead shim" Mar 17 18:40:59.972077 env[1303]: time="2025-03-17T18:40:59.972033417Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4077 runtime=io.containerd.runc.v2\n" Mar 17 18:41:00.227472 kubelet[2168]: I0317 18:41:00.227448 2168 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e81ca2-cc03-4b00-a620-085e30de48d3" path="/var/lib/kubelet/pods/f0e81ca2-cc03-4b00-a620-085e30de48d3/volumes" Mar 17 18:41:00.379679 kubelet[2168]: E0317 18:41:00.379650 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:00.381259 env[1303]: time="2025-03-17T18:41:00.381229496Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:41:00.439019 env[1303]: time="2025-03-17T18:41:00.438962630Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cbc8adde828ab029077c1aaaade4c81875b034d75708495e65e6f9cea89c8e17\"" Mar 17 18:41:00.439534 env[1303]: time="2025-03-17T18:41:00.439506595Z" level=info msg="StartContainer for \"cbc8adde828ab029077c1aaaade4c81875b034d75708495e65e6f9cea89c8e17\"" Mar 17 18:41:00.501131 env[1303]: time="2025-03-17T18:41:00.500769390Z" level=info msg="StartContainer 
for \"cbc8adde828ab029077c1aaaade4c81875b034d75708495e65e6f9cea89c8e17\" returns successfully" Mar 17 18:41:00.521038 env[1303]: time="2025-03-17T18:41:00.520984460Z" level=info msg="shim disconnected" id=cbc8adde828ab029077c1aaaade4c81875b034d75708495e65e6f9cea89c8e17 Mar 17 18:41:00.521038 env[1303]: time="2025-03-17T18:41:00.521032663Z" level=warning msg="cleaning up after shim disconnected" id=cbc8adde828ab029077c1aaaade4c81875b034d75708495e65e6f9cea89c8e17 namespace=k8s.io Mar 17 18:41:00.521038 env[1303]: time="2025-03-17T18:41:00.521042210Z" level=info msg="cleaning up dead shim" Mar 17 18:41:00.526105 env[1303]: time="2025-03-17T18:41:00.526068916Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:41:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4138 runtime=io.containerd.runc.v2\n" Mar 17 18:41:01.291155 kubelet[2168]: E0317 18:41:01.291122 2168 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:41:01.382583 kubelet[2168]: E0317 18:41:01.382558 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:41:01.383763 env[1303]: time="2025-03-17T18:41:01.383723952Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:41:01.402843 env[1303]: time="2025-03-17T18:41:01.402788270Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2bc490c4502244763f1ec0f852e1318fe18b837886df12352e7a6f391e4f6a3a\"" Mar 17 18:41:01.403383 env[1303]: time="2025-03-17T18:41:01.403302205Z" level=info 
msg="StartContainer for \"2bc490c4502244763f1ec0f852e1318fe18b837886df12352e7a6f391e4f6a3a\"" Mar 17 18:41:01.461913 env[1303]: time="2025-03-17T18:41:01.461851601Z" level=info msg="StartContainer for \"2bc490c4502244763f1ec0f852e1318fe18b837886df12352e7a6f391e4f6a3a\" returns successfully" Mar 17 18:41:01.478629 env[1303]: time="2025-03-17T18:41:01.478575354Z" level=info msg="shim disconnected" id=2bc490c4502244763f1ec0f852e1318fe18b837886df12352e7a6f391e4f6a3a Mar 17 18:41:01.478629 env[1303]: time="2025-03-17T18:41:01.478621984Z" level=warning msg="cleaning up after shim disconnected" id=2bc490c4502244763f1ec0f852e1318fe18b837886df12352e7a6f391e4f6a3a namespace=k8s.io Mar 17 18:41:01.478629 env[1303]: time="2025-03-17T18:41:01.478631011Z" level=info msg="cleaning up dead shim" Mar 17 18:41:01.492481 env[1303]: time="2025-03-17T18:41:01.492401185Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:41:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4194 runtime=io.containerd.runc.v2\n" Mar 17 18:41:01.726198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bc490c4502244763f1ec0f852e1318fe18b837886df12352e7a6f391e4f6a3a-rootfs.mount: Deactivated successfully. 
Mar 17 18:41:02.386045 kubelet[2168]: E0317 18:41:02.386013 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:02.388274 env[1303]: time="2025-03-17T18:41:02.388198356Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:41:02.402829 env[1303]: time="2025-03-17T18:41:02.402756446Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2fd478a85957155e8188087981f665923eae0f14218f1dbdbe93aab3e8bc0307\""
Mar 17 18:41:02.403538 env[1303]: time="2025-03-17T18:41:02.403509030Z" level=info msg="StartContainer for \"2fd478a85957155e8188087981f665923eae0f14218f1dbdbe93aab3e8bc0307\""
Mar 17 18:41:02.447570 env[1303]: time="2025-03-17T18:41:02.447513093Z" level=info msg="StartContainer for \"2fd478a85957155e8188087981f665923eae0f14218f1dbdbe93aab3e8bc0307\" returns successfully"
Mar 17 18:41:02.464800 env[1303]: time="2025-03-17T18:41:02.464747417Z" level=info msg="shim disconnected" id=2fd478a85957155e8188087981f665923eae0f14218f1dbdbe93aab3e8bc0307
Mar 17 18:41:02.464800 env[1303]: time="2025-03-17T18:41:02.464799226Z" level=warning msg="cleaning up after shim disconnected" id=2fd478a85957155e8188087981f665923eae0f14218f1dbdbe93aab3e8bc0307 namespace=k8s.io
Mar 17 18:41:02.464800 env[1303]: time="2025-03-17T18:41:02.464808464Z" level=info msg="cleaning up dead shim"
Mar 17 18:41:02.470822 env[1303]: time="2025-03-17T18:41:02.470789909Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:41:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4248 runtime=io.containerd.runc.v2\n"
Mar 17 18:41:02.725896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fd478a85957155e8188087981f665923eae0f14218f1dbdbe93aab3e8bc0307-rootfs.mount: Deactivated successfully.
Mar 17 18:41:03.390065 kubelet[2168]: E0317 18:41:03.390039 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:03.393200 env[1303]: time="2025-03-17T18:41:03.393137079Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:41:03.407653 env[1303]: time="2025-03-17T18:41:03.407601153Z" level=info msg="CreateContainer within sandbox \"83f337589f93caa9068f123a4df1624ed2e1668a0156e1f48562c3a6ab9ebf43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a90aed8bad980172598fc34cee4101fdfbb497f297ae27f1e467d253141be0b6\""
Mar 17 18:41:03.408045 env[1303]: time="2025-03-17T18:41:03.408020297Z" level=info msg="StartContainer for \"a90aed8bad980172598fc34cee4101fdfbb497f297ae27f1e467d253141be0b6\""
Mar 17 18:41:03.453215 env[1303]: time="2025-03-17T18:41:03.453174327Z" level=info msg="StartContainer for \"a90aed8bad980172598fc34cee4101fdfbb497f297ae27f1e467d253141be0b6\" returns successfully"
Mar 17 18:41:03.684910 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:41:04.394431 kubelet[2168]: E0317 18:41:04.394398 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:04.406795 kubelet[2168]: I0317 18:41:04.406740 2168 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d74ld" podStartSLOduration=5.406720526 podStartE2EDuration="5.406720526s" podCreationTimestamp="2025-03-17 18:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:41:04.406670119 +0000 UTC m=+88.259476466" watchObservedRunningTime="2025-03-17 18:41:04.406720526 +0000 UTC m=+88.259526893"
Mar 17 18:41:05.827548 kubelet[2168]: E0317 18:41:05.827520 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:06.199203 systemd-networkd[1079]: lxc_health: Link UP
Mar 17 18:41:06.209485 systemd-networkd[1079]: lxc_health: Gained carrier
Mar 17 18:41:06.209910 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:41:06.410532 systemd[1]: run-containerd-runc-k8s.io-a90aed8bad980172598fc34cee4101fdfbb497f297ae27f1e467d253141be0b6-runc.5c0eRn.mount: Deactivated successfully.
Mar 17 18:41:07.829929 kubelet[2168]: E0317 18:41:07.829896 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:08.225075 systemd-networkd[1079]: lxc_health: Gained IPv6LL
Mar 17 18:41:08.401006 kubelet[2168]: E0317 18:41:08.400973 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:08.570602 systemd[1]: run-containerd-runc-k8s.io-a90aed8bad980172598fc34cee4101fdfbb497f297ae27f1e467d253141be0b6-runc.Deo6rO.mount: Deactivated successfully.
Mar 17 18:41:09.230064 kubelet[2168]: E0317 18:41:09.230004 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:41:12.758459 systemd[1]: run-containerd-runc-k8s.io-a90aed8bad980172598fc34cee4101fdfbb497f297ae27f1e467d253141be0b6-runc.8BhUS9.mount: Deactivated successfully.
Mar 17 18:41:12.805235 sshd[3960]: pam_unix(sshd:session): session closed for user core
Mar 17 18:41:12.807388 systemd[1]: sshd@24-10.0.0.57:22-10.0.0.1:34120.service: Deactivated successfully.
Mar 17 18:41:12.808251 systemd-logind[1289]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:41:12.808291 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:41:12.809206 systemd-logind[1289]: Removed session 25.
Mar 17 18:41:13.225697 kubelet[2168]: E0317 18:41:13.225650 2168 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"