Mar 17 18:42:40.082831 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025 Mar 17 18:42:40.082853 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:42:40.082863 kernel: BIOS-provided physical RAM map: Mar 17 18:42:40.082869 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 17 18:42:40.082874 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 17 18:42:40.082879 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 17 18:42:40.082886 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 17 18:42:40.082892 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 17 18:42:40.082898 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 17 18:42:40.082906 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 17 18:42:40.082912 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 17 18:42:40.082917 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Mar 17 18:42:40.082923 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 17 18:42:40.082929 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 17 18:42:40.082936 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 17 18:42:40.082944 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 17 18:42:40.082950 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 17 
18:42:40.082956 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 17 18:42:40.082962 kernel: NX (Execute Disable) protection: active Mar 17 18:42:40.082968 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Mar 17 18:42:40.082974 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Mar 17 18:42:40.082980 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Mar 17 18:42:40.082986 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Mar 17 18:42:40.082992 kernel: extended physical RAM map: Mar 17 18:42:40.082998 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 17 18:42:40.083005 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 17 18:42:40.083011 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 17 18:42:40.083017 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Mar 17 18:42:40.083023 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 17 18:42:40.083030 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 17 18:42:40.083039 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 17 18:42:40.083045 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Mar 17 18:42:40.083051 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Mar 17 18:42:40.083057 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable Mar 17 18:42:40.083063 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Mar 17 18:42:40.083069 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Mar 17 18:42:40.083076 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Mar 17 18:42:40.083082 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 17 18:42:40.083088 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 17 18:42:40.083095 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 17 18:42:40.083104 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 17 18:42:40.083110 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 17 18:42:40.083117 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 17 18:42:40.083125 kernel: efi: EFI v2.70 by EDK II Mar 17 18:42:40.083131 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Mar 17 18:42:40.083138 kernel: random: crng init done Mar 17 18:42:40.083144 kernel: SMBIOS 2.8 present. Mar 17 18:42:40.083151 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 17 18:42:40.083158 kernel: Hypervisor detected: KVM Mar 17 18:42:40.083164 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 18:42:40.083171 kernel: kvm-clock: cpu 0, msr 1e19a001, primary cpu clock Mar 17 18:42:40.083189 kernel: kvm-clock: using sched offset of 5032595946 cycles Mar 17 18:42:40.083201 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 18:42:40.083208 kernel: tsc: Detected 2794.748 MHz processor Mar 17 18:42:40.083215 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 18:42:40.083222 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 18:42:40.083229 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 17 18:42:40.083236 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 18:42:40.083242 kernel: Using GB pages for direct mapping Mar 17 18:42:40.083249 kernel: Secure boot disabled Mar 17 18:42:40.083256 kernel: ACPI: Early table checksum verification disabled Mar 17 18:42:40.083264 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 17 18:42:40.083271 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 17 18:42:40.083277 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:42:40.083284 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:42:40.083291 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 17 18:42:40.083297 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:42:40.083304 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:42:40.083311 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:42:40.083318 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:42:40.083326 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 17 18:42:40.083333 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 17 18:42:40.083339 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Mar 17 18:42:40.083346 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 17 18:42:40.083352 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 17 18:42:40.083359 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 17 18:42:40.083369 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 17 18:42:40.083375 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 17 18:42:40.083382 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 17 18:42:40.083390 kernel: No NUMA configuration found Mar 17 18:42:40.083397 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 17 18:42:40.083404 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 17 
18:42:40.083410 kernel: Zone ranges: Mar 17 18:42:40.083417 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 18:42:40.083424 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 17 18:42:40.083430 kernel: Normal empty Mar 17 18:42:40.083437 kernel: Movable zone start for each node Mar 17 18:42:40.083444 kernel: Early memory node ranges Mar 17 18:42:40.083452 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 17 18:42:40.083458 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 17 18:42:40.083465 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 17 18:42:40.083472 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 17 18:42:40.083478 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 17 18:42:40.083485 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 17 18:42:40.083491 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 17 18:42:40.083498 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 18:42:40.083505 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 17 18:42:40.083512 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 17 18:42:40.083520 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 18:42:40.083526 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 17 18:42:40.083533 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 17 18:42:40.083540 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 17 18:42:40.083547 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 18:42:40.083553 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 18:42:40.083560 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 18:42:40.083567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 18:42:40.083574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 18:42:40.083582 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 18:42:40.083604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 18:42:40.083611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 18:42:40.083621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 18:42:40.083627 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 17 18:42:40.083634 kernel: TSC deadline timer available Mar 17 18:42:40.083641 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 17 18:42:40.083647 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 17 18:42:40.083654 kernel: kvm-guest: setup PV sched yield Mar 17 18:42:40.083662 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 17 18:42:40.083669 kernel: Booting paravirtualized kernel on KVM Mar 17 18:42:40.083681 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 18:42:40.083690 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Mar 17 18:42:40.083697 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Mar 17 18:42:40.083704 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Mar 17 18:42:40.083711 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 17 18:42:40.083721 kernel: kvm-guest: setup async PF for cpu 0 Mar 17 18:42:40.083728 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Mar 17 18:42:40.083735 kernel: kvm-guest: PV spinlocks enabled Mar 17 18:42:40.083742 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 18:42:40.083749 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 Mar 17 18:42:40.083758 kernel: Policy zone: DMA32 Mar 17 18:42:40.083766 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:42:40.083782 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 18:42:40.083789 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 18:42:40.083798 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 18:42:40.083806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 18:42:40.083813 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved) Mar 17 18:42:40.083820 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 18:42:40.083827 kernel: ftrace: allocating 34580 entries in 136 pages Mar 17 18:42:40.083835 kernel: ftrace: allocated 136 pages with 2 groups Mar 17 18:42:40.083842 kernel: rcu: Hierarchical RCU implementation. Mar 17 18:42:40.083860 kernel: rcu: RCU event tracing is enabled. Mar 17 18:42:40.083868 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 18:42:40.083877 kernel: Rude variant of Tasks RCU enabled. Mar 17 18:42:40.083886 kernel: Tracing variant of Tasks RCU enabled. Mar 17 18:42:40.083894 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 18:42:40.083903 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 18:42:40.083910 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 17 18:42:40.083917 kernel: Console: colour dummy device 80x25 Mar 17 18:42:40.083924 kernel: printk: console [ttyS0] enabled Mar 17 18:42:40.083931 kernel: ACPI: Core revision 20210730 Mar 17 18:42:40.083938 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 17 18:42:40.083947 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 18:42:40.083954 kernel: x2apic enabled Mar 17 18:42:40.083961 kernel: Switched APIC routing to physical x2apic. Mar 17 18:42:40.083968 kernel: kvm-guest: setup PV IPIs Mar 17 18:42:40.083975 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 17 18:42:40.083982 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 17 18:42:40.083989 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Mar 17 18:42:40.083996 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 17 18:42:40.084003 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 17 18:42:40.084012 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 17 18:42:40.084019 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 18:42:40.084026 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 18:42:40.084033 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 18:42:40.084040 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 18:42:40.084047 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 17 18:42:40.084054 kernel: RETBleed: Mitigation: untrained return thunk Mar 17 18:42:40.084064 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 18:42:40.084071 kernel: Speculative Store Bypass: Mitigation: 
Speculative Store Bypass disabled via prctl and seccomp Mar 17 18:42:40.084079 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 18:42:40.084089 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 18:42:40.084096 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 18:42:40.084104 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 18:42:40.084111 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 17 18:42:40.084118 kernel: Freeing SMP alternatives memory: 32K Mar 17 18:42:40.084125 kernel: pid_max: default: 32768 minimum: 301 Mar 17 18:42:40.084132 kernel: LSM: Security Framework initializing Mar 17 18:42:40.084138 kernel: SELinux: Initializing. Mar 17 18:42:40.084148 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:42:40.084155 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:42:40.084162 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 17 18:42:40.084169 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 17 18:42:40.084185 kernel: ... version: 0 Mar 17 18:42:40.084193 kernel: ... bit width: 48 Mar 17 18:42:40.084199 kernel: ... generic registers: 6 Mar 17 18:42:40.084207 kernel: ... value mask: 0000ffffffffffff Mar 17 18:42:40.084214 kernel: ... max period: 00007fffffffffff Mar 17 18:42:40.084223 kernel: ... fixed-purpose events: 0 Mar 17 18:42:40.084230 kernel: ... event mask: 000000000000003f Mar 17 18:42:40.084237 kernel: signal: max sigframe size: 1776 Mar 17 18:42:40.084244 kernel: rcu: Hierarchical SRCU implementation. Mar 17 18:42:40.084251 kernel: smp: Bringing up secondary CPUs ... Mar 17 18:42:40.084258 kernel: x86: Booting SMP configuration: Mar 17 18:42:40.084265 kernel: .... 
node #0, CPUs: #1 Mar 17 18:42:40.084272 kernel: kvm-clock: cpu 1, msr 1e19a041, secondary cpu clock Mar 17 18:42:40.084279 kernel: kvm-guest: setup async PF for cpu 1 Mar 17 18:42:40.084287 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Mar 17 18:42:40.084294 kernel: #2 Mar 17 18:42:40.084302 kernel: kvm-clock: cpu 2, msr 1e19a081, secondary cpu clock Mar 17 18:42:40.084309 kernel: kvm-guest: setup async PF for cpu 2 Mar 17 18:42:40.084316 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Mar 17 18:42:40.084323 kernel: #3 Mar 17 18:42:40.084330 kernel: kvm-clock: cpu 3, msr 1e19a0c1, secondary cpu clock Mar 17 18:42:40.084336 kernel: kvm-guest: setup async PF for cpu 3 Mar 17 18:42:40.084343 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Mar 17 18:42:40.084352 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 18:42:40.084359 kernel: smpboot: Max logical packages: 1 Mar 17 18:42:40.084366 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Mar 17 18:42:40.084373 kernel: devtmpfs: initialized Mar 17 18:42:40.084380 kernel: x86/mm: Memory block size: 128MB Mar 17 18:42:40.084387 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 17 18:42:40.084394 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 17 18:42:40.084401 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 17 18:42:40.084409 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 17 18:42:40.084417 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 17 18:42:40.084424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 18:42:40.084431 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 18:42:40.084439 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 18:42:40.084446 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family Mar 17 18:42:40.084453 kernel: audit: initializing netlink subsys (disabled) Mar 17 18:42:40.084460 kernel: audit: type=2000 audit(1742236959.345:1): state=initialized audit_enabled=0 res=1 Mar 17 18:42:40.084467 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 18:42:40.084474 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 18:42:40.084483 kernel: cpuidle: using governor menu Mar 17 18:42:40.084490 kernel: ACPI: bus type PCI registered Mar 17 18:42:40.084497 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 18:42:40.084504 kernel: dca service started, version 1.12.1 Mar 17 18:42:40.084511 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 17 18:42:40.084519 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Mar 17 18:42:40.084526 kernel: PCI: Using configuration type 1 for base access Mar 17 18:42:40.084533 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 17 18:42:40.084540 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 18:42:40.084549 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 18:42:40.084556 kernel: ACPI: Added _OSI(Module Device) Mar 17 18:42:40.084563 kernel: ACPI: Added _OSI(Processor Device) Mar 17 18:42:40.084570 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 18:42:40.084577 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 18:42:40.084584 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 18:42:40.084591 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 18:42:40.084598 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 18:42:40.084605 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 18:42:40.084614 kernel: ACPI: Interpreter enabled Mar 17 18:42:40.084621 kernel: ACPI: PM: (supports S0 S3 S5) Mar 17 18:42:40.084628 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 18:42:40.084636 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 18:42:40.084643 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 18:42:40.084650 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 18:42:40.084856 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 18:42:40.084977 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 17 18:42:40.085133 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 17 18:42:40.085143 kernel: PCI host bridge to bus 0000:00 Mar 17 18:42:40.085252 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 18:42:40.085323 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 18:42:40.085391 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 17 18:42:40.085457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 17 
18:42:40.085523 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 18:42:40.085595 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 17 18:42:40.085666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 18:42:40.085763 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 18:42:40.085866 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 17 18:42:40.085942 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 17 18:42:40.086018 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 17 18:42:40.088304 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 17 18:42:40.088434 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 17 18:42:40.088513 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 18:42:40.088610 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 18:42:40.088695 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 17 18:42:40.088785 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 17 18:42:40.088865 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 17 18:42:40.088963 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 17 18:42:40.089043 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 17 18:42:40.089120 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 17 18:42:40.089213 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 17 18:42:40.089309 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 17 18:42:40.089389 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 17 18:42:40.089466 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 17 18:42:40.089548 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 17 18:42:40.089625 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] Mar 17 18:42:40.089720 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 18:42:40.089807 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 18:42:40.089920 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 18:42:40.090038 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 17 18:42:40.090236 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 17 18:42:40.090368 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 18:42:40.090450 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 17 18:42:40.090460 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 18:42:40.090469 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 18:42:40.090477 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 18:42:40.090484 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 18:42:40.090492 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 18:42:40.090499 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 18:42:40.090510 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 18:42:40.090518 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 18:42:40.090526 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 18:42:40.090534 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 18:42:40.090541 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 18:42:40.090548 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 18:42:40.090556 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 18:42:40.090563 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 18:42:40.090571 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 18:42:40.090579 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 17 
18:42:40.090587 kernel: iommu: Default domain type: Translated Mar 17 18:42:40.090595 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 18:42:40.090672 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 17 18:42:40.090750 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 18:42:40.090837 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 18:42:40.090848 kernel: vgaarb: loaded Mar 17 18:42:40.090856 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 18:42:40.090864 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 18:42:40.090875 kernel: PTP clock support registered Mar 17 18:42:40.090884 kernel: Registered efivars operations Mar 17 18:42:40.090892 kernel: PCI: Using ACPI for IRQ routing Mar 17 18:42:40.090901 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 18:42:40.090910 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 17 18:42:40.090917 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 17 18:42:40.090925 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Mar 17 18:42:40.090933 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Mar 17 18:42:40.090940 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 17 18:42:40.090949 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 17 18:42:40.090956 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 17 18:42:40.090964 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 17 18:42:40.090972 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 18:42:40.090979 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 18:42:40.090987 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 18:42:40.090995 kernel: pnp: PnP ACPI init Mar 17 18:42:40.091100 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 17 18:42:40.091116 kernel: pnp: PnP ACPI: found 6 devices 
Mar 17 18:42:40.091123 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:42:40.091131 kernel: NET: Registered PF_INET protocol family
Mar 17 18:42:40.091139 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:42:40.091146 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:42:40.091154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:42:40.091162 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:42:40.091170 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:42:40.091192 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:42:40.091200 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:42:40.091208 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:42:40.091216 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:42:40.091223 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:42:40.091314 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 17 18:42:40.091412 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 17 18:42:40.091496 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:42:40.091573 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:42:40.091644 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:42:40.094027 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 18:42:40.094124 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:42:40.094212 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Mar 17 18:42:40.094224 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:42:40.094232 kernel: Initialise system trusted keyrings
Mar 17 18:42:40.094240 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:42:40.094248 kernel: Key type asymmetric registered
Mar 17 18:42:40.094260 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:42:40.094268 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:42:40.094288 kernel: io scheduler mq-deadline registered
Mar 17 18:42:40.094298 kernel: io scheduler kyber registered
Mar 17 18:42:40.094306 kernel: io scheduler bfq registered
Mar 17 18:42:40.094314 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:42:40.094323 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:42:40.094336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:42:40.094345 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 18:42:40.094355 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:42:40.094363 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:42:40.094371 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:42:40.094379 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:42:40.094387 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:42:40.094395 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:42:40.094501 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:42:40.094575 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:42:40.094648 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:42:39 UTC (1742236959)
Mar 17 18:42:40.094715 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:42:40.094725 kernel: efifb: probing for efifb
Mar 17 18:42:40.094733 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 17 18:42:40.094741 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 17 18:42:40.094749 kernel: efifb: scrolling: redraw
Mar 17 18:42:40.094757 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 18:42:40.094765 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 18:42:40.094781 kernel: fb0: EFI VGA frame buffer device
Mar 17 18:42:40.094792 kernel: pstore: Registered efi as persistent store backend
Mar 17 18:42:40.094802 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:42:40.094810 kernel: Segment Routing with IPv6
Mar 17 18:42:40.094820 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:42:40.094827 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:42:40.094837 kernel: Key type dns_resolver registered
Mar 17 18:42:40.094845 kernel: IPI shorthand broadcast: enabled
Mar 17 18:42:40.094853 kernel: sched_clock: Marking stable (565001579, 126906166)->(713972834, -22065089)
Mar 17 18:42:40.094861 kernel: registered taskstats version 1
Mar 17 18:42:40.094869 kernel: Loading compiled-in X.509 certificates
Mar 17 18:42:40.094877 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:42:40.094885 kernel: Key type .fscrypt registered
Mar 17 18:42:40.094893 kernel: Key type fscrypt-provisioning registered
Mar 17 18:42:40.094901 kernel: pstore: Using crash dump compression: deflate
Mar 17 18:42:40.094911 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:42:40.094919 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:42:40.094926 kernel: ima: No architecture policies found
Mar 17 18:42:40.094934 kernel: clk: Disabling unused clocks
Mar 17 18:42:40.094942 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:42:40.094950 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:42:40.094958 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:42:40.094966 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:42:40.094974 kernel: Run /init as init process
Mar 17 18:42:40.094983 kernel: with arguments:
Mar 17 18:42:40.094991 kernel: /init
Mar 17 18:42:40.094999 kernel: with environment:
Mar 17 18:42:40.095006 kernel: HOME=/
Mar 17 18:42:40.095013 kernel: TERM=linux
Mar 17 18:42:40.095021 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:42:40.095032 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:42:40.095043 systemd[1]: Detected virtualization kvm.
Mar 17 18:42:40.095053 systemd[1]: Detected architecture x86-64.
Mar 17 18:42:40.095061 systemd[1]: Running in initrd.
Mar 17 18:42:40.095069 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:42:40.095077 systemd[1]: Hostname set to .
Mar 17 18:42:40.095086 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:42:40.095094 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:42:40.095102 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:42:40.095110 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:42:40.095120 systemd[1]: Reached target paths.target.
Mar 17 18:42:40.095128 systemd[1]: Reached target slices.target.
Mar 17 18:42:40.095136 systemd[1]: Reached target swap.target.
Mar 17 18:42:40.095144 systemd[1]: Reached target timers.target.
Mar 17 18:42:40.095153 systemd[1]: Listening on iscsid.socket.
Mar 17 18:42:40.095161 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:42:40.095170 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:42:40.095189 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:42:40.095199 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:42:40.095208 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:42:40.095216 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:42:40.095224 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:42:40.095232 systemd[1]: Reached target sockets.target.
Mar 17 18:42:40.095241 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:42:40.095249 systemd[1]: Finished network-cleanup.service.
Mar 17 18:42:40.095257 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:42:40.095265 systemd[1]: Starting systemd-journald.service...
Mar 17 18:42:40.095275 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:42:40.095283 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:42:40.095292 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:42:40.095300 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:42:40.095308 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:42:40.095316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:42:40.095324 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:42:40.095333 kernel: audit: type=1130 audit(1742236960.086:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.095343 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:42:40.095352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:42:40.095361 kernel: audit: type=1130 audit(1742236960.093:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.095373 systemd-journald[198]: Journal started
Mar 17 18:42:40.095426 systemd-journald[198]: Runtime Journal (/run/log/journal/6fb008e601dd4961b14189540f1d871e) is 6.0M, max 48.4M, 42.4M free.
Mar 17 18:42:40.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.080143 systemd-modules-load[199]: Inserted module 'overlay'
Mar 17 18:42:40.098449 systemd[1]: Started systemd-journald.service.
Mar 17 18:42:40.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.103210 kernel: audit: type=1130 audit(1742236960.099:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.111568 systemd-resolved[200]: Positive Trust Anchors:
Mar 17 18:42:40.111605 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:42:40.111633 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:42:40.114159 systemd-resolved[200]: Defaulting to hostname 'linux'.
Mar 17 18:42:40.130991 kernel: audit: type=1130 audit(1742236960.120:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.131019 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:42:40.131029 kernel: audit: type=1130 audit(1742236960.125:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.131038 kernel: Bridge firewalling registered
Mar 17 18:42:40.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.114211 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:42:40.120466 systemd[1]: Started systemd-resolved.service.
Mar 17 18:42:40.125878 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:42:40.130997 systemd-modules-load[199]: Inserted module 'br_netfilter'
Mar 17 18:42:40.131747 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:42:40.140623 dracut-cmdline[218]: dracut-dracut-053
Mar 17 18:42:40.142831 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:42:40.149211 kernel: SCSI subsystem initialized
Mar 17 18:42:40.161283 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:42:40.161326 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:42:40.161337 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:42:40.163901 systemd-modules-load[199]: Inserted module 'dm_multipath'
Mar 17 18:42:40.164589 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:42:40.169568 kernel: audit: type=1130 audit(1742236960.165:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.166063 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:42:40.176000 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:42:40.180317 kernel: audit: type=1130 audit(1742236960.176:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.211199 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:42:40.227231 kernel: iscsi: registered transport (tcp)
Mar 17 18:42:40.249207 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:42:40.249231 kernel: QLogic iSCSI HBA Driver
Mar 17 18:42:40.296596 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:42:40.302008 kernel: audit: type=1130 audit(1742236960.297:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.298430 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:42:40.345208 kernel: raid6: avx2x4 gen() 28003 MB/s
Mar 17 18:42:40.362203 kernel: raid6: avx2x4 xor() 7270 MB/s
Mar 17 18:42:40.379201 kernel: raid6: avx2x2 gen() 30309 MB/s
Mar 17 18:42:40.396202 kernel: raid6: avx2x2 xor() 19244 MB/s
Mar 17 18:42:40.413211 kernel: raid6: avx2x1 gen() 24833 MB/s
Mar 17 18:42:40.430224 kernel: raid6: avx2x1 xor() 13196 MB/s
Mar 17 18:42:40.447210 kernel: raid6: sse2x4 gen() 13697 MB/s
Mar 17 18:42:40.464204 kernel: raid6: sse2x4 xor() 6331 MB/s
Mar 17 18:42:40.481217 kernel: raid6: sse2x2 gen() 14217 MB/s
Mar 17 18:42:40.498213 kernel: raid6: sse2x2 xor() 8221 MB/s
Mar 17 18:42:40.515200 kernel: raid6: sse2x1 gen() 9783 MB/s
Mar 17 18:42:40.532627 kernel: raid6: sse2x1 xor() 7485 MB/s
Mar 17 18:42:40.532661 kernel: raid6: using algorithm avx2x2 gen() 30309 MB/s
Mar 17 18:42:40.532671 kernel: raid6: .... xor() 19244 MB/s, rmw enabled
Mar 17 18:42:40.533342 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:42:40.546230 kernel: xor: automatically using best checksumming function avx
Mar 17 18:42:40.638213 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 18:42:40.649169 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:42:40.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.653000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:42:40.653000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:42:40.654214 kernel: audit: type=1130 audit(1742236960.650:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.654478 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:42:40.667487 systemd-udevd[402]: Using default interface naming scheme 'v252'.
Mar 17 18:42:40.671522 systemd[1]: Started systemd-udevd.service.
Mar 17 18:42:40.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.673831 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:42:40.686654 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Mar 17 18:42:40.714044 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:42:40.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.716584 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:42:40.757090 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:42:40.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:40.790856 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:42:40.796676 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:42:40.796689 kernel: GPT:9289727 != 19775487
Mar 17 18:42:40.796698 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:42:40.796707 kernel: GPT:9289727 != 19775487
Mar 17 18:42:40.796715 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:42:40.796724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:42:40.800198 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:42:40.807847 kernel: libata version 3.00 loaded.
Mar 17 18:42:40.810683 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:42:40.810706 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:42:40.815324 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 18:42:40.829678 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 18:42:40.829692 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 18:42:40.829802 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 18:42:40.829884 kernel: scsi host0: ahci
Mar 17 18:42:40.830005 kernel: scsi host1: ahci
Mar 17 18:42:40.830103 kernel: scsi host2: ahci
Mar 17 18:42:40.830210 kernel: scsi host3: ahci
Mar 17 18:42:40.830304 kernel: scsi host4: ahci
Mar 17 18:42:40.830392 kernel: scsi host5: ahci
Mar 17 18:42:40.830497 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 17 18:42:40.830507 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 17 18:42:40.830520 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 17 18:42:40.830529 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 17 18:42:40.830538 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 17 18:42:40.830547 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 17 18:42:40.838884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:42:40.842209 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Mar 17 18:42:40.842713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:42:40.848323 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:42:40.859444 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:42:40.864331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:42:40.866895 systemd[1]: Starting disk-uuid.service...
Mar 17 18:42:40.873723 disk-uuid[528]: Primary Header is updated.
Mar 17 18:42:40.873723 disk-uuid[528]: Secondary Entries is updated.
Mar 17 18:42:40.873723 disk-uuid[528]: Secondary Header is updated.
Mar 17 18:42:40.877197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:42:40.884197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:42:41.137226 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 18:42:41.137311 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 18:42:41.138203 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 18:42:41.139835 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 18:42:41.139912 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 18:42:41.141251 kernel: ata3.00: applying bridge limits
Mar 17 18:42:41.142214 kernel: ata3.00: configured for UDMA/100
Mar 17 18:42:41.142239 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 18:42:41.148224 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 18:42:41.148249 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 18:42:41.180632 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 18:42:41.198140 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:42:41.198162 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 18:42:41.883024 disk-uuid[529]: The operation has completed successfully.
Mar 17 18:42:41.884458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:42:41.905128 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:42:41.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:41.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:41.905235 systemd[1]: Finished disk-uuid.service.
Mar 17 18:42:41.909400 systemd[1]: Starting verity-setup.service...
Mar 17 18:42:41.923208 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 18:42:41.944588 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:42:41.946372 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:42:41.948273 systemd[1]: Finished verity-setup.service.
Mar 17 18:42:41.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.010107 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:42:42.011545 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:42:42.010679 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:42:42.011419 systemd[1]: Starting ignition-setup.service...
Mar 17 18:42:42.015082 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:42:42.021608 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:42:42.021661 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:42:42.021672 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:42:42.030694 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:42:42.039010 systemd[1]: Finished ignition-setup.service.
Mar 17 18:42:42.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.041299 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:42:42.088737 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:42:42.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.091000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:42:42.091956 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:42:42.103476 ignition[640]: Ignition 2.14.0
Mar 17 18:42:42.103487 ignition[640]: Stage: fetch-offline
Mar 17 18:42:42.103555 ignition[640]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:42:42.103566 ignition[640]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:42:42.103681 ignition[640]: parsed url from cmdline: ""
Mar 17 18:42:42.103684 ignition[640]: no config URL provided
Mar 17 18:42:42.103689 ignition[640]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:42:42.103695 ignition[640]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:42:42.103737 ignition[640]: op(1): [started] loading QEMU firmware config module
Mar 17 18:42:42.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.103742 ignition[640]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 18:42:42.112974 systemd-networkd[721]: lo: Link UP
Mar 17 18:42:42.111311 ignition[640]: op(1): [finished] loading QEMU firmware config module
Mar 17 18:42:42.112977 systemd-networkd[721]: lo: Gained carrier
Mar 17 18:42:42.111333 ignition[640]: QEMU firmware config was not found. Ignoring...
Mar 17 18:42:42.113417 systemd-networkd[721]: Enumeration completed
Mar 17 18:42:42.113487 systemd[1]: Started systemd-networkd.service.
Mar 17 18:42:42.114009 systemd[1]: Reached target network.target.
Mar 17 18:42:42.114747 systemd[1]: Starting iscsiuio.service...
Mar 17 18:42:42.115781 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:42:42.116820 systemd-networkd[721]: eth0: Link UP
Mar 17 18:42:42.116823 systemd-networkd[721]: eth0: Gained carrier
Mar 17 18:42:42.159329 ignition[640]: parsing config with SHA512: c9a164c0819af7218275456b5d5d07224e4446231464ee33fa990e50e9d6d94aec8b84fb99e5ed0a9e956647d5b9558258639d08a79077021916e114e077a420
Mar 17 18:42:42.173569 unknown[640]: fetched base config from "system"
Mar 17 18:42:42.174570 unknown[640]: fetched user config from "qemu"
Mar 17 18:42:42.175971 ignition[640]: fetch-offline: fetch-offline passed
Mar 17 18:42:42.176958 ignition[640]: Ignition finished successfully
Mar 17 18:42:42.178774 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:42:42.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.179772 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 18:42:42.180426 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:42:42.192090 ignition[727]: Ignition 2.14.0
Mar 17 18:42:42.192101 ignition[727]: Stage: kargs
Mar 17 18:42:42.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.192660 systemd[1]: Started iscsiuio.service.
Mar 17 18:42:42.192208 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:42:42.194579 systemd[1]: Starting iscsid.service...
Mar 17 18:42:42.198514 iscsid[734]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:42:42.198514 iscsid[734]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Mar 17 18:42:42.198514 iscsid[734]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Mar 17 18:42:42.198514 iscsid[734]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:42:42.198514 iscsid[734]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:42:42.198514 iscsid[734]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:42:42.198514 iscsid[734]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:42:42.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.192218 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:42:42.198658 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:42:42.193169 ignition[727]: kargs: kargs passed
Mar 17 18:42:42.200232 systemd[1]: Starting ignition-disks.service...
Mar 17 18:42:42.193218 ignition[727]: Ignition finished successfully
Mar 17 18:42:42.201465 systemd[1]: Started iscsid.service.
Mar 17 18:42:42.205955 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:42:42.207318 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:42:42.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.220283 ignition[735]: Ignition 2.14.0
Mar 17 18:42:42.220630 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:42:42.220289 ignition[735]: Stage: disks
Mar 17 18:42:42.221669 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:42:42.220406 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:42:42.223105 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:42:42.220416 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:42:42.224748 systemd[1]: Reached target remote-fs.target.
Mar 17 18:42:42.221400 ignition[735]: disks: disks passed
Mar 17 18:42:42.221435 ignition[735]: Ignition finished successfully
Mar 17 18:42:42.232204 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:42:42.233860 systemd[1]: Finished ignition-disks.service.
Mar 17 18:42:42.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.235574 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:42:42.237303 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:42:42.239034 systemd[1]: Reached target local-fs.target.
Mar 17 18:42:42.240558 systemd[1]: Reached target sysinit.target.
Mar 17 18:42:42.242038 systemd[1]: Reached target basic.target.
Mar 17 18:42:42.243739 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:42:42.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.245965 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:42:42.257403 systemd-fsck[757]: ROOT: clean, 623/553520 files, 56022/553472 blocks
Mar 17 18:42:42.263071 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:42:42.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.265802 systemd[1]: Mounting sysroot.mount...
Mar 17 18:42:42.272192 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:42:42.272544 systemd[1]: Mounted sysroot.mount.
Mar 17 18:42:42.273920 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:42:42.276287 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:42:42.278241 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:42:42.278280 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:42:42.278301 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:42:42.283597 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:42:42.285853 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:42:42.290478 initrd-setup-root[767]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:42:42.294313 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:42:42.298308 initrd-setup-root[783]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:42:42.301252 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:42:42.327057 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:42:42.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.328772 systemd[1]: Starting ignition-mount.service...
Mar 17 18:42:42.329826 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:42:42.337042 bash[808]: umount: /sysroot/usr/share/oem: not mounted.
Mar 17 18:42:42.344639 ignition[810]: INFO : Ignition 2.14.0
Mar 17 18:42:42.344639 ignition[810]: INFO : Stage: mount
Mar 17 18:42:42.346338 ignition[810]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:42:42.346338 ignition[810]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:42:42.346338 ignition[810]: INFO : mount: mount passed
Mar 17 18:42:42.346338 ignition[810]: INFO : Ignition finished successfully
Mar 17 18:42:42.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.346511 systemd[1]: Finished ignition-mount.service.
Mar 17 18:42:42.351486 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:42:42.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:42.958465 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:42:42.975195 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818)
Mar 17 18:42:42.977284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:42:42.977370 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:42:42.977380 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:42:42.981465 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:42:42.983125 systemd[1]: Starting ignition-files.service...
Mar 17 18:42:43.000852 ignition[838]: INFO : Ignition 2.14.0
Mar 17 18:42:43.000852 ignition[838]: INFO : Stage: files
Mar 17 18:42:43.002558 ignition[838]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:42:43.002558 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:42:43.002558 ignition[838]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:42:43.006209 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:42:43.006209 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:42:43.006209 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:42:43.006209 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:42:43.006209 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:42:43.006159 unknown[838]: wrote ssh authorized keys file for user: core
Mar 17 18:42:43.014130 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:42:43.014130 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:42:43.077793 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:42:43.239517 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:42:43.241635 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:42:43.241635 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:42:43.458484 systemd-networkd[721]: eth0: Gained IPv6LL
Mar 17 18:42:43.611220 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:42:43.753101 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:42:43.753101 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:42:43.756771 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:42:43.756771 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:42:43.760215 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:42:43.760215 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:42:43.763567 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:42:43.765492 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:42:43.767281 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:42:43.768936 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:42:43.770669 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:42:43.772345 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:42:43.774723 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:42:43.777077 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:42:43.779128 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 18:42:44.081704 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 18:42:44.667674 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 18:42:44.667674 ignition[838]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:42:44.680666 ignition[838]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:42:44.757857 ignition[838]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:42:44.759518 ignition[838]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:42:44.759518 ignition[838]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:42:44.759518 ignition[838]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:42:44.759518 ignition[838]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:42:44.759518 ignition[838]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:42:44.759518 ignition[838]: INFO : files: files passed
Mar 17 18:42:44.759518 ignition[838]: INFO : Ignition finished successfully
Mar 17 18:42:44.768818 systemd[1]: Finished ignition-files.service.
Mar 17 18:42:44.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.771191 kernel: kauditd_printk_skb: 23 callbacks suppressed
Mar 17 18:42:44.771213 kernel: audit: type=1130 audit(1742236964.770:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.771252 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:42:44.774950 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:42:44.777929 initrd-setup-root-after-ignition[862]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Mar 17 18:42:44.779422 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:42:44.781060 systemd[1]: Starting ignition-quench.service...
Mar 17 18:42:44.782825 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:42:44.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.785006 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:42:44.788802 kernel: audit: type=1130 audit(1742236964.784:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.785079 systemd[1]: Finished ignition-quench.service.
Mar 17 18:42:44.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.790414 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:42:44.797570 kernel: audit: type=1130 audit(1742236964.790:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.797584 kernel: audit: type=1131 audit(1742236964.790:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.798148 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:42:44.809683 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:42:44.810671 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:42:44.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.812326 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:42:44.819436 kernel: audit: type=1130 audit(1742236964.812:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.819456 kernel: audit: type=1131 audit(1742236964.812:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.819398 systemd[1]: Reached target initrd.target.
Mar 17 18:42:44.820860 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:42:44.822718 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:42:44.832006 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:42:44.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.834256 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:42:44.872897 kernel: audit: type=1130 audit(1742236964.833:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.877471 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:42:44.879121 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:42:44.880908 systemd[1]: Stopped target timers.target.
Mar 17 18:42:44.882423 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:42:44.883421 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:42:44.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.885126 systemd[1]: Stopped target initrd.target.
Mar 17 18:42:44.889332 kernel: audit: type=1131 audit(1742236964.884:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.889380 systemd[1]: Stopped target basic.target.
Mar 17 18:42:44.890874 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:42:44.913730 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:42:44.915470 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:42:44.917244 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:42:44.918840 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:42:44.920525 systemd[1]: Stopped target sysinit.target.
Mar 17 18:42:44.922055 systemd[1]: Stopped target local-fs.target.
Mar 17 18:42:44.923631 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:42:44.925276 systemd[1]: Stopped target swap.target.
Mar 17 18:42:44.926717 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:42:44.927709 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:42:44.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.929384 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:42:44.933660 kernel: audit: type=1131 audit(1742236964.929:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.933700 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:42:44.934687 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:42:44.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.936351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:42:44.940139 kernel: audit: type=1131 audit(1742236964.936:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.936446 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:42:44.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.941879 systemd[1]: Stopped target paths.target.
Mar 17 18:42:44.943362 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:42:44.948219 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:42:44.950028 systemd[1]: Stopped target slices.target.
Mar 17 18:42:44.951542 systemd[1]: Stopped target sockets.target.
Mar 17 18:42:44.953073 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:42:44.954253 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:42:44.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.956238 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:42:44.957198 systemd[1]: Stopped ignition-files.service.
Mar 17 18:42:44.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.959877 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:42:44.961540 systemd[1]: Stopping iscsid.service...
Mar 17 18:42:44.963010 iscsid[734]: iscsid shutting down.
Mar 17 18:42:44.965685 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:42:44.966201 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:42:44.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.966348 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:42:44.971145 ignition[879]: INFO : Ignition 2.14.0
Mar 17 18:42:44.971145 ignition[879]: INFO : Stage: umount
Mar 17 18:42:44.971145 ignition[879]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:42:44.971145 ignition[879]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:42:44.971145 ignition[879]: INFO : umount: umount passed
Mar 17 18:42:44.971145 ignition[879]: INFO : Ignition finished successfully
Mar 17 18:42:44.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.968047 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:42:44.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.968133 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:42:44.970806 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:42:44.970883 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:42:44.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.973548 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:42:44.973643 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:42:44.975365 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:42:44.975440 systemd[1]: Stopped iscsid.service.
Mar 17 18:42:44.977327 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:42:44.977353 systemd[1]: Closed iscsid.socket.
Mar 17 18:42:44.978702 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:42:44.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.978737 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:42:44.980509 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:42:45.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.980542 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:42:45.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.981424 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:42:44.981456 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:42:44.982368 systemd[1]: Stopping iscsiuio.service...
Mar 17 18:42:45.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.984763 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:42:44.985209 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:42:45.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.985274 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:42:45.064000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:42:45.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.986835 systemd[1]: Stopped target network.target.
Mar 17 18:42:44.988322 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:42:44.988352 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:42:44.989136 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:42:45.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.990858 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:42:44.993217 systemd-networkd[721]: eth0: DHCPv6 lease lost
Mar 17 18:42:45.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.090000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:42:44.994914 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:42:45.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.995029 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:42:44.997323 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:42:45.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:44.997379 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:42:45.018657 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:42:45.019558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:42:45.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.019601 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:42:45.021480 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:42:45.021514 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:42:45.053890 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:42:45.053923 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:42:45.055475 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:42:45.059220 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:42:45.059679 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:42:45.059777 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:42:45.062809 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:42:45.062941 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:42:45.064642 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:42:45.064717 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:42:45.066220 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:42:45.066252 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:42:45.067807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:42:45.067835 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:42:45.069456 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:42:45.069490 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:42:45.087097 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:42:45.087132 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:42:45.088682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:42:45.088715 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:42:45.090948 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:42:45.092013 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:42:45.092054 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 18:42:45.093987 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:42:45.094023 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:42:45.095649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:42:45.095683 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:42:45.098226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 18:42:45.098574 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:42:45.098652 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:42:45.167623 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:42:45.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:42:45.167709 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:42:45.168035 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:42:45.168190 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:42:45.168224 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:42:45.168941 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:42:45.184890 systemd[1]: Switching root.
Mar 17 18:42:45.209300 systemd-journald[198]: Journal stopped
Mar 17 18:42:48.969044 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:42:48.969103 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:42:48.969121 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:42:48.969136 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:42:48.969145 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:42:48.969169 kernel: SELinux: policy capability open_perms=1
Mar 17 18:42:48.969192 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:42:48.969201 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:42:48.969215 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:42:48.969225 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:42:48.969234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:42:48.969244 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:42:48.969259 systemd[1]: Successfully loaded SELinux policy in 45.711ms.
Mar 17 18:42:48.969274 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.029ms.
Mar 17 18:42:48.969285 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:42:48.969298 systemd[1]: Detected virtualization kvm.
Mar 17 18:42:48.969308 systemd[1]: Detected architecture x86-64.
Mar 17 18:42:48.969319 systemd[1]: Detected first boot.
Mar 17 18:42:48.969329 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:42:48.969340 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:42:48.969354 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:42:48.969364 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:42:48.969376 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:42:48.969388 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:42:48.969404 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:42:48.969414 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:42:48.969426 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:42:48.969437 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:42:48.969447 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:42:48.969457 systemd[1]: Created slice system-getty.slice.
Mar 17 18:42:48.969468 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:42:48.969478 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:42:48.969489 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:42:48.969507 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:42:48.969523 systemd[1]: Created slice user.slice. Mar 17 18:42:48.969533 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:42:48.969543 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:42:48.969554 systemd[1]: Set up automount boot.automount. Mar 17 18:42:48.969564 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:42:48.969575 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:42:48.969585 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:42:48.969595 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:42:48.969607 systemd[1]: Reached target integritysetup.target. Mar 17 18:42:48.969618 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:42:48.969628 systemd[1]: Reached target remote-fs.target. Mar 17 18:42:48.969642 systemd[1]: Reached target slices.target. Mar 17 18:42:48.969653 systemd[1]: Reached target swap.target. Mar 17 18:42:48.969663 systemd[1]: Reached target torcx.target. Mar 17 18:42:48.969673 systemd[1]: Reached target veritysetup.target. Mar 17 18:42:48.969684 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:42:48.969694 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:42:48.969704 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:42:48.969717 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:42:48.969727 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:42:48.969738 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:42:48.969748 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:42:48.969758 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:42:48.969769 systemd[1]: Mounting media.mount... 
Mar 17 18:42:48.969779 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:48.969789 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:42:48.969799 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:42:48.969811 systemd[1]: Mounting tmp.mount... Mar 17 18:42:48.969821 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:42:48.969832 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:48.969842 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:42:48.969852 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:42:48.969867 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:48.969877 systemd[1]: Starting modprobe@drm.service... Mar 17 18:42:48.969887 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:48.969898 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:42:48.969909 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:48.969920 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:42:48.969930 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:42:48.969940 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:42:48.969951 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:42:48.969961 kernel: loop: module loaded Mar 17 18:42:48.969970 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:42:48.969981 systemd[1]: Stopped systemd-journald.service. Mar 17 18:42:48.969990 kernel: fuse: init (API version 7.34) Mar 17 18:42:48.970002 systemd[1]: Starting systemd-journald.service... Mar 17 18:42:48.970012 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:42:48.970022 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:42:48.970033 systemd[1]: Starting systemd-remount-fs.service... 
Mar 17 18:42:48.970043 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:42:48.970054 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:42:48.970066 systemd-journald[997]: Journal started Mar 17 18:42:48.970106 systemd-journald[997]: Runtime Journal (/run/log/journal/6fb008e601dd4961b14189540f1d871e) is 6.0M, max 48.4M, 42.4M free. Mar 17 18:42:45.283000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:42:45.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:42:45.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:42:45.779000 audit: BPF prog-id=10 op=LOAD Mar 17 18:42:45.779000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:42:45.779000 audit: BPF prog-id=11 op=LOAD Mar 17 18:42:45.779000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:42:45.812000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:42:45.812000 audit[912]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878dc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:45.812000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:42:45.814000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:42:45.814000 audit[912]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b5 a2=1ed a3=0 items=2 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:45.814000 audit: CWD cwd="/" Mar 17 18:42:45.814000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:45.814000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:45.814000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:42:48.842000 audit: BPF prog-id=12 op=LOAD Mar 17 18:42:48.842000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:42:48.842000 audit: BPF prog-id=13 op=LOAD Mar 17 18:42:48.842000 audit: BPF prog-id=14 op=LOAD Mar 17 18:42:48.842000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:42:48.842000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:42:48.843000 audit: 
BPF prog-id=15 op=LOAD Mar 17 18:42:48.843000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:42:48.843000 audit: BPF prog-id=16 op=LOAD Mar 17 18:42:48.843000 audit: BPF prog-id=17 op=LOAD Mar 17 18:42:48.843000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:42:48.843000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:42:48.844000 audit: BPF prog-id=18 op=LOAD Mar 17 18:42:48.844000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:42:48.844000 audit: BPF prog-id=19 op=LOAD Mar 17 18:42:48.844000 audit: BPF prog-id=20 op=LOAD Mar 17 18:42:48.844000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:42:48.844000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:42:48.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.855000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:42:48.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:48.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.953000 audit: BPF prog-id=21 op=LOAD Mar 17 18:42:48.953000 audit: BPF prog-id=22 op=LOAD Mar 17 18:42:48.953000 audit: BPF prog-id=23 op=LOAD Mar 17 18:42:48.953000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:42:48.953000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:42:48.967000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:42:48.967000 audit[997]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe09a8b8e0 a2=4000 a3=7ffe09a8b97c items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:48.967000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:42:45.811070 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:42:48.840419 systemd[1]: Queued start job for default target multi-user.target. 
Mar 17 18:42:45.811439 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:42:48.840433 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:42:45.811498 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:42:48.844996 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:42:45.811584 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:42:48.971679 systemd[1]: Stopped verity-setup.service. Mar 17 18:42:45.811603 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:42:45.811649 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:42:45.811663 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:42:45.811917 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:42:45.811981 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:42:45.812007 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="profile found" name=vendor 
path=/usr/share/torcx/profiles/vendor.json Mar 17 18:42:45.812419 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:42:45.812465 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:42:45.812485 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:42:45.812499 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:42:45.812533 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:42:48.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:45.812546 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:42:48.561724 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:48.562214 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:48.562348 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:48.562544 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:42:48.562593 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:42:48.562668 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-03-17T18:42:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" 
TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:42:48.975213 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:48.977364 systemd[1]: Started systemd-journald.service. Mar 17 18:42:48.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.977946 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:42:48.978819 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:42:48.979649 systemd[1]: Mounted media.mount. Mar 17 18:42:48.980422 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:42:48.981337 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:42:48.982268 systemd[1]: Mounted tmp.mount. Mar 17 18:42:48.983384 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:42:48.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.984529 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:42:48.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.985624 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:42:48.985784 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:42:48.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:42:48.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.986860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:48.986997 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:42:48.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.988135 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:42:48.988280 systemd[1]: Finished modprobe@drm.service. Mar 17 18:42:48.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.989363 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:48.989617 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:48.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:48.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.990758 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:42:48.990950 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:42:48.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.991991 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:42:48.992163 systemd[1]: Finished modprobe@loop.service. Mar 17 18:42:48.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.993331 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:42:48.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.994517 systemd[1]: Finished systemd-network-generator.service. 
Mar 17 18:42:48.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.995726 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:42:48.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:48.996963 systemd[1]: Reached target network-pre.target. Mar 17 18:42:48.998923 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:42:49.000769 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:42:49.001710 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:42:49.003574 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:42:49.005663 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:42:49.006581 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:42:49.007804 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:42:49.008731 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:42:49.009821 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:42:49.012057 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:42:49.015233 systemd-journald[997]: Time spent on flushing to /var/log/journal/6fb008e601dd4961b14189540f1d871e is 24.820ms for 1177 entries. Mar 17 18:42:49.015233 systemd-journald[997]: System Journal (/var/log/journal/6fb008e601dd4961b14189540f1d871e) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:42:49.051689 systemd-journald[997]: Received client request to flush runtime journal. 
Mar 17 18:42:49.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.015761 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:42:49.017675 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:42:49.022322 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:42:49.023380 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:42:49.041721 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:42:49.044094 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:42:49.045294 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:42:49.047201 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:42:49.049335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:42:49.053800 udevadm[1018]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:42:49.054222 systemd[1]: Finished systemd-journal-flush.service. 
Mar 17 18:42:49.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.064095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:42:49.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.491154 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:42:49.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.492000 audit: BPF prog-id=24 op=LOAD Mar 17 18:42:49.492000 audit: BPF prog-id=25 op=LOAD Mar 17 18:42:49.492000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:42:49.492000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:42:49.493683 systemd[1]: Starting systemd-udevd.service... Mar 17 18:42:49.509744 systemd-udevd[1022]: Using default interface naming scheme 'v252'. Mar 17 18:42:49.522680 systemd[1]: Started systemd-udevd.service. Mar 17 18:42:49.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.525000 audit: BPF prog-id=26 op=LOAD Mar 17 18:42:49.526419 systemd[1]: Starting systemd-networkd.service... Mar 17 18:42:49.530000 audit: BPF prog-id=27 op=LOAD Mar 17 18:42:49.530000 audit: BPF prog-id=28 op=LOAD Mar 17 18:42:49.530000 audit: BPF prog-id=29 op=LOAD Mar 17 18:42:49.531391 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:42:49.556653 systemd[1]: Started systemd-userdbd.service. 
Mar 17 18:42:49.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.560791 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:42:49.581815 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:42:49.598221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:42:49.602110 systemd-networkd[1029]: lo: Link UP Mar 17 18:42:49.602406 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:42:49.602121 systemd-networkd[1029]: lo: Gained carrier Mar 17 18:42:49.602528 systemd-networkd[1029]: Enumeration completed Mar 17 18:42:49.602651 systemd[1]: Started systemd-networkd.service. Mar 17 18:42:49.602800 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:42:49.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:49.604487 systemd-networkd[1029]: eth0: Link UP Mar 17 18:42:49.604496 systemd-networkd[1029]: eth0: Gained carrier Mar 17 18:42:49.615332 systemd-networkd[1029]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:42:49.620000 audit[1041]: AVC avc: denied { confidentiality } for pid=1041 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:42:49.620000 audit[1041]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cf0fcc4fa0 a1=338ac a2=7f4966cf5bc5 a3=5 items=110 ppid=1022 pid=1041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:49.620000 audit: CWD cwd="/" Mar 17 18:42:49.620000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=1 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=2 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=3 name=(null) inode=14692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=4 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:42:49.620000 audit: PATH item=5 name=(null) inode=14693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=6 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=7 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=8 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=9 name=(null) inode=14695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=10 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=11 name=(null) inode=14696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=12 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=13 name=(null) inode=14697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=14 name=(null) 
inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=15 name=(null) inode=14698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=16 name=(null) inode=14694 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=17 name=(null) inode=14699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=18 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=19 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=20 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=21 name=(null) inode=14701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=22 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=23 name=(null) inode=14702 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=24 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=25 name=(null) inode=14703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=26 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=27 name=(null) inode=14704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=28 name=(null) inode=14700 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=29 name=(null) inode=14705 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=30 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=31 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=32 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=33 name=(null) inode=14707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=34 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=35 name=(null) inode=14708 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=36 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=37 name=(null) inode=14709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=38 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=39 name=(null) inode=14710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=40 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=41 name=(null) inode=14711 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=42 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=43 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=44 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=45 name=(null) inode=14713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=46 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=47 name=(null) inode=14714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=48 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=49 name=(null) inode=14715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=50 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=51 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=52 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=53 name=(null) inode=14717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=55 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=56 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=57 name=(null) inode=14719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=58 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=59 name=(null) inode=14720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
18:42:49.620000 audit: PATH item=60 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=61 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=62 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=63 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=64 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=65 name=(null) inode=14723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=66 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=67 name=(null) inode=14724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=68 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=69 
name=(null) inode=14725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=70 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=71 name=(null) inode=14726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=72 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=73 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=74 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=75 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=76 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=77 name=(null) inode=14729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=78 name=(null) inode=14727 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=79 name=(null) inode=14730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=80 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=81 name=(null) inode=14731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=82 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=83 name=(null) inode=14732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=84 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=85 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=86 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=87 name=(null) inode=14734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=88 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=89 name=(null) inode=14735 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=90 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=91 name=(null) inode=14736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=92 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=93 name=(null) inode=14737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=94 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=95 name=(null) inode=14738 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=96 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=97 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=98 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=99 name=(null) inode=14740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=100 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=101 name=(null) inode=14741 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=102 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=103 name=(null) inode=14742 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=104 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=105 name=(null) inode=14743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=106 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=107 name=(null) inode=14744 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PATH item=109 name=(null) inode=14745 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:42:49.620000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 18:42:49.631196 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 18:42:49.656944 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 17 18:42:49.660584 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 18:42:49.660700 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 18:42:49.660728 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 18:42:49.660841 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 18:42:49.702253 kernel: kvm: Nested Virtualization enabled Mar 17 18:42:49.702333 kernel: SVM: kvm: Nested Paging enabled Mar 17 18:42:49.702349 kernel: SVM: Virtual VMLOAD VMSAVE supported Mar 17 18:42:49.703441 kernel: SVM: Virtual GIF supported Mar 17 18:42:49.718212 kernel: EDAC MC: Ver: 3.0.0 Mar 17 18:42:49.743597 systemd[1]: Finished systemd-udev-settle.service. 
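The long run of `audit: PATH` records above (emitted for a single `(udev-worker)` syscall against tracefs) uses a simple space-separated `key=value` layout. As a minimal sketch, one record can be pulled apart like this; `parse_audit_fields` is a hypothetical helper, not part of the log or of any audit tooling, and it deliberately ignores quoted values with embedded spaces (e.g. `proctitle=` strings):

```python
def parse_audit_fields(record: str) -> dict:
    """Split an audit record body into a {key: value} dict.

    Assumes the plain `key=value` shape seen in PATH records; values
    containing spaces (e.g. quoted proctitle strings) are not handled.
    """
    fields = {}
    for token in record.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

# One PATH record copied from the log above.
sample = ("audit: PATH item=5 name=(null) inode=14693 dev=00:0b "
          "mode=0100640 ouid=0 ogid=0 rdev=00:00 "
          "obj=system_u:object_r:tracefs_t:s0 nametype=CREATE "
          "cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0")

parsed = parse_audit_fields(sample)
print(parsed["inode"], parsed["nametype"], parsed["mode"])
# -> 14693 CREATE 0100640
```

Note how the PATH items alternate `nametype=PARENT` / `nametype=CREATE` pairs: each pair records the directory touched and the tracefs entry created beneath it.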
Mar 17 18:42:49.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.745934 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:42:49.755109 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:42:49.783042 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:42:49.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.784186 systemd[1]: Reached target cryptsetup.target. Mar 17 18:42:49.784994 kernel: kauditd_printk_skb: 234 callbacks suppressed Mar 17 18:42:49.785032 kernel: audit: type=1130 audit(1742236969.783:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.790562 systemd[1]: Starting lvm2-activation.service... Mar 17 18:42:49.794425 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:42:49.823529 systemd[1]: Finished lvm2-activation.service. Mar 17 18:42:49.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.824610 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:42:49.828197 kernel: audit: type=1130 audit(1742236969.824:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:49.828740 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:42:49.828765 systemd[1]: Reached target local-fs.target. Mar 17 18:42:49.829616 systemd[1]: Reached target machines.target. Mar 17 18:42:49.831749 systemd[1]: Starting ldconfig.service... Mar 17 18:42:49.832743 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:42:49.832785 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:42:49.833664 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:42:49.835682 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:42:49.837915 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:42:49.843432 systemd[1]: Starting systemd-sysext.service... Mar 17 18:42:49.845044 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1061 (bootctl) Mar 17 18:42:49.846075 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:42:49.847710 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:42:49.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.854162 kernel: audit: type=1130 audit(1742236969.848:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.852869 systemd[1]: Unmounting usr-share-oem.mount... 
Mar 17 18:42:49.856275 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:42:49.856522 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:42:49.867239 kernel: loop0: detected capacity change from 0 to 205544 Mar 17 18:42:49.890134 systemd-fsck[1070]: fsck.fat 4.2 (2021-01-31) Mar 17 18:42:49.890134 systemd-fsck[1070]: /dev/vda1: 790 files, 119319/258078 clusters Mar 17 18:42:49.892270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:42:49.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:49.896104 systemd[1]: Mounting boot.mount... Mar 17 18:42:49.899204 kernel: audit: type=1130 audit(1742236969.894:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.044103 systemd[1]: Mounted boot.mount. Mar 17 18:42:50.054211 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:42:50.056964 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:42:50.057557 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:42:50.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.059564 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:42:50.064205 kernel: audit: type=1130 audit(1742236970.059:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:42:50.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.069216 kernel: audit: type=1130 audit(1742236970.064:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.075205 kernel: loop1: detected capacity change from 0 to 205544 Mar 17 18:42:50.079234 (sd-sysext)[1074]: Using extensions 'kubernetes'. Mar 17 18:42:50.079566 (sd-sysext)[1074]: Merged extensions into '/usr'. Mar 17 18:42:50.097153 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.098929 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:42:50.100303 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.101733 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:50.103717 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:50.105731 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:50.106777 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.106977 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:42:50.107118 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.108245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:50.108415 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:42:50.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.109969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:50.110094 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:50.113214 kernel: audit: type=1130 audit(1742236970.109:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.113281 kernel: audit: type=1131 audit(1742236970.109:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.118732 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:42:50.118895 systemd[1]: Finished modprobe@loop.service. Mar 17 18:42:50.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:50.122223 kernel: audit: type=1130 audit(1742236970.118:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.122295 kernel: audit: type=1131 audit(1742236970.118:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.128402 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:42:50.128653 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.140775 ldconfig[1060]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:42:50.148591 systemd[1]: Finished ldconfig.service. Mar 17 18:42:50.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.174040 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:42:50.176044 systemd[1]: Finished systemd-sysext.service. 
Mar 17 18:42:50.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.178013 systemd[1]: Starting ensure-sysext.service... Mar 17 18:42:50.179633 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:42:50.184391 systemd[1]: Reloading. Mar 17 18:42:50.189816 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:42:50.190617 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:42:50.192069 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:42:50.254837 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-03-17T18:42:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:42:50.255230 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-03-17T18:42:50Z" level=info msg="torcx already run" Mar 17 18:42:50.312214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:42:50.312231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:42:50.329568 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:42:50.380000 audit: BPF prog-id=30 op=LOAD Mar 17 18:42:50.380000 audit: BPF prog-id=27 op=UNLOAD Mar 17 18:42:50.380000 audit: BPF prog-id=31 op=LOAD Mar 17 18:42:50.380000 audit: BPF prog-id=32 op=LOAD Mar 17 18:42:50.380000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:42:50.380000 audit: BPF prog-id=29 op=UNLOAD Mar 17 18:42:50.381000 audit: BPF prog-id=33 op=LOAD Mar 17 18:42:50.381000 audit: BPF prog-id=34 op=LOAD Mar 17 18:42:50.381000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:42:50.381000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:42:50.382000 audit: BPF prog-id=35 op=LOAD Mar 17 18:42:50.382000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:42:50.382000 audit: BPF prog-id=36 op=LOAD Mar 17 18:42:50.382000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:42:50.382000 audit: BPF prog-id=37 op=LOAD Mar 17 18:42:50.382000 audit: BPF prog-id=38 op=LOAD Mar 17 18:42:50.382000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:42:50.382000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:42:50.386287 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:42:50.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.390713 systemd[1]: Starting audit-rules.service... Mar 17 18:42:50.392866 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:42:50.394993 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:42:50.396000 audit: BPF prog-id=39 op=LOAD Mar 17 18:42:50.397847 systemd[1]: Starting systemd-resolved.service... Mar 17 18:42:50.400000 audit: BPF prog-id=40 op=LOAD Mar 17 18:42:50.401748 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:42:50.403911 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:42:50.405343 systemd[1]: Finished clean-ca-certificates.service. 
Mar 17 18:42:50.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.409000 audit[1155]: SYSTEM_BOOT pid=1155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.408478 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:42:50.412862 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.413218 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.415032 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:50.417523 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:50.419945 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:50.421124 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.421316 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:42:50.421470 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:42:50.421581 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.423051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:50.423169 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:42:50.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.424620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:50.424719 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:50.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.426142 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:42:50.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.427679 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:42:50.427779 systemd[1]: Finished modprobe@loop.service. Mar 17 18:42:50.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:42:50.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.431762 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:42:50.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:42:50.433738 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.433925 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.435162 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:50.436977 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:50.438761 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:50.439979 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.439000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:42:50.439000 audit[1168]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5a75c4b0 a2=420 a3=0 items=0 ppid=1144 pid=1168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:42:50.439000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:42:50.440097 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:42:50.440294 augenrules[1168]: No rules Mar 17 18:42:50.441279 systemd[1]: Starting systemd-update-done.service... Mar 17 18:42:50.442187 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:42:50.442277 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.443249 systemd[1]: Finished audit-rules.service. Mar 17 18:42:50.444536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:50.444653 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:42:50.445918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:50.446025 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:50.447327 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:42:50.447464 systemd[1]: Finished modprobe@loop.service. Mar 17 18:42:50.448849 systemd[1]: Finished systemd-update-done.service. Mar 17 18:42:50.452529 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.452783 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.454262 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:42:50.455929 systemd[1]: Starting modprobe@drm.service... Mar 17 18:42:50.457722 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:42:50.459650 systemd[1]: Starting modprobe@loop.service... Mar 17 18:42:50.460634 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.460745 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:42:50.461784 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:42:50.462806 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:42:50.462899 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 18:42:50.463812 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:42:50.464042 systemd-timesyncd[1154]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:42:50.464409 systemd-timesyncd[1154]: Initial clock synchronization to Mon 2025-03-17 18:42:50.814530 UTC. Mar 17 18:42:50.465709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:42:50.465859 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:42:50.467239 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:42:50.467305 systemd-resolved[1150]: Positive Trust Anchors: Mar 17 18:42:50.467319 systemd-resolved[1150]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:42:50.467344 systemd[1]: Finished modprobe@drm.service. Mar 17 18:42:50.467363 systemd-resolved[1150]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:42:50.468660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:42:50.468760 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:42:50.470162 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 17 18:42:50.470279 systemd[1]: Finished modprobe@loop.service. Mar 17 18:42:50.471882 systemd[1]: Reached target time-set.target. Mar 17 18:42:50.473067 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:42:50.473113 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.473439 systemd[1]: Finished ensure-sysext.service. Mar 17 18:42:50.478015 systemd-resolved[1150]: Defaulting to hostname 'linux'. Mar 17 18:42:50.480053 systemd[1]: Started systemd-resolved.service. Mar 17 18:42:50.481043 systemd[1]: Reached target network.target. Mar 17 18:42:50.481901 systemd[1]: Reached target nss-lookup.target. Mar 17 18:42:50.482826 systemd[1]: Reached target sysinit.target. Mar 17 18:42:50.483752 systemd[1]: Started motdgen.path. Mar 17 18:42:50.484570 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:42:50.485846 systemd[1]: Started logrotate.timer. Mar 17 18:42:50.486725 systemd[1]: Started mdadm.timer. Mar 17 18:42:50.487473 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:42:50.488407 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:42:50.488438 systemd[1]: Reached target paths.target. Mar 17 18:42:50.489267 systemd[1]: Reached target timers.target. Mar 17 18:42:50.490514 systemd[1]: Listening on dbus.socket. Mar 17 18:42:50.492240 systemd[1]: Starting docker.socket... Mar 17 18:42:50.495639 systemd[1]: Listening on sshd.socket. Mar 17 18:42:50.496593 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:42:50.496976 systemd[1]: Listening on docker.socket. Mar 17 18:42:50.497828 systemd[1]: Reached target sockets.target. 
Mar 17 18:42:50.498640 systemd[1]: Reached target basic.target. Mar 17 18:42:50.499440 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.499474 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:42:50.500561 systemd[1]: Starting containerd.service... Mar 17 18:42:50.502540 systemd[1]: Starting dbus.service... Mar 17 18:42:50.504209 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:42:50.506425 systemd[1]: Starting extend-filesystems.service... Mar 17 18:42:50.507490 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:42:50.508726 systemd[1]: Starting motdgen.service... Mar 17 18:42:50.512412 jq[1186]: false Mar 17 18:42:50.510728 systemd[1]: Starting prepare-helm.service... Mar 17 18:42:50.512956 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:42:50.515361 systemd[1]: Starting sshd-keygen.service... Mar 17 18:42:50.520420 extend-filesystems[1187]: Found loop1 Mar 17 18:42:50.521494 extend-filesystems[1187]: Found sr0 Mar 17 18:42:50.521494 extend-filesystems[1187]: Found vda Mar 17 18:42:50.521494 extend-filesystems[1187]: Found vda1 Mar 17 18:42:50.521494 extend-filesystems[1187]: Found vda2 Mar 17 18:42:50.521494 extend-filesystems[1187]: Found vda3 Mar 17 18:42:50.521494 extend-filesystems[1187]: Found usr Mar 17 18:42:50.526262 systemd[1]: Starting systemd-logind.service... 
Mar 17 18:42:50.527633 extend-filesystems[1187]: Found vda4 Mar 17 18:42:50.527633 extend-filesystems[1187]: Found vda6 Mar 17 18:42:50.527633 extend-filesystems[1187]: Found vda7 Mar 17 18:42:50.527633 extend-filesystems[1187]: Found vda9 Mar 17 18:42:50.527633 extend-filesystems[1187]: Checking size of /dev/vda9 Mar 17 18:42:50.532310 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:42:50.532384 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:42:50.532940 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:42:50.533808 systemd[1]: Starting update-engine.service... Mar 17 18:42:50.548790 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:42:50.548729 dbus-daemon[1185]: [system] SELinux support is enabled Mar 17 18:42:50.550529 systemd[1]: Started dbus.service. Mar 17 18:42:50.551246 jq[1207]: true Mar 17 18:42:50.554016 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:42:50.554259 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:42:50.554633 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:42:50.554766 systemd[1]: Finished motdgen.service. Mar 17 18:42:50.556484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:42:50.558534 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:42:50.565074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:42:50.565129 systemd[1]: Reached target system-config.target. 
Mar 17 18:42:50.566480 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:42:50.566509 systemd[1]: Reached target user-config.target. Mar 17 18:42:50.576681 tar[1209]: linux-amd64/helm Mar 17 18:42:50.580743 jq[1213]: true Mar 17 18:42:50.600364 extend-filesystems[1187]: Resized partition /dev/vda9 Mar 17 18:42:50.605266 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:42:50.610231 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 18:42:50.650808 systemd-logind[1205]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 18:42:50.650835 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 18:42:50.651227 systemd-logind[1205]: New seat seat0. Mar 17 18:42:50.654149 systemd[1]: Started systemd-logind.service. Mar 17 18:42:50.660206 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 18:42:50.686290 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:42:50.686290 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:42:50.686290 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 18:42:50.690506 extend-filesystems[1187]: Resized filesystem in /dev/vda9 Mar 17 18:42:50.692341 update_engine[1206]: I0317 18:42:50.689154 1206 main.cc:92] Flatcar Update Engine starting Mar 17 18:42:50.693352 env[1214]: time="2025-03-17T18:42:50.686870666Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:42:50.689634 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:42:50.699304 update_engine[1206]: I0317 18:42:50.695134 1206 update_check_scheduler.cc:74] Next update check in 3m42s Mar 17 18:42:50.689793 systemd[1]: Finished extend-filesystems.service. 
Mar 17 18:42:50.694629 systemd[1]: Started update-engine.service. Mar 17 18:42:50.697474 systemd[1]: Started locksmithd.service. Mar 17 18:42:50.704153 bash[1235]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:42:50.705020 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:42:50.713682 env[1214]: time="2025-03-17T18:42:50.713650207Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:42:50.713883 env[1214]: time="2025-03-17T18:42:50.713864339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:50.715570 env[1214]: time="2025-03-17T18:42:50.715544249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:42:50.715675 env[1214]: time="2025-03-17T18:42:50.715656129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:50.715943 env[1214]: time="2025-03-17T18:42:50.715920996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:42:50.716028 env[1214]: time="2025-03-17T18:42:50.716009141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:42:50.716111 env[1214]: time="2025-03-17T18:42:50.716090013Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:42:50.716212 env[1214]: time="2025-03-17T18:42:50.716173649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:50.716351 env[1214]: time="2025-03-17T18:42:50.716332648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:50.716728 env[1214]: time="2025-03-17T18:42:50.716708332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:42:50.716916 env[1214]: time="2025-03-17T18:42:50.716893860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:42:50.717000 env[1214]: time="2025-03-17T18:42:50.716981194Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:42:50.717115 env[1214]: time="2025-03-17T18:42:50.717095568Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:42:50.717239 env[1214]: time="2025-03-17T18:42:50.717220994Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:42:50.731797 env[1214]: time="2025-03-17T18:42:50.731744921Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:42:50.731851 env[1214]: time="2025-03-17T18:42:50.731798401Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Mar 17 18:42:50.731851 env[1214]: time="2025-03-17T18:42:50.731816555Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:42:50.731902 env[1214]: time="2025-03-17T18:42:50.731876839Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.731902 env[1214]: time="2025-03-17T18:42:50.731894782Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.731943 env[1214]: time="2025-03-17T18:42:50.731908909Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.731943 env[1214]: time="2025-03-17T18:42:50.731921322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.731981 env[1214]: time="2025-03-17T18:42:50.731969823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.732002 env[1214]: time="2025-03-17T18:42:50.731987636Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.732026 env[1214]: time="2025-03-17T18:42:50.732002394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.732026 env[1214]: time="2025-03-17T18:42:50.732016190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.732064 env[1214]: time="2025-03-17T18:42:50.732028403Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:42:50.732212 env[1214]: time="2025-03-17T18:42:50.732194083Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 18:42:50.732315 env[1214]: time="2025-03-17T18:42:50.732292899Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:42:50.732575 env[1214]: time="2025-03-17T18:42:50.732553708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:42:50.732604 env[1214]: time="2025-03-17T18:42:50.732586089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732604 env[1214]: time="2025-03-17T18:42:50.732599494Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:42:50.732682 env[1214]: time="2025-03-17T18:42:50.732661530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732682 env[1214]: time="2025-03-17T18:42:50.732679955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732725 env[1214]: time="2025-03-17T18:42:50.732693550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732725 env[1214]: time="2025-03-17T18:42:50.732703920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732764 env[1214]: time="2025-03-17T18:42:50.732735028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732764 env[1214]: time="2025-03-17T18:42:50.732749245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732764 env[1214]: time="2025-03-17T18:42:50.732760115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Mar 17 18:42:50.732824 env[1214]: time="2025-03-17T18:42:50.732771226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732824 env[1214]: time="2025-03-17T18:42:50.732784501Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:42:50.732937 env[1214]: time="2025-03-17T18:42:50.732911739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732937 env[1214]: time="2025-03-17T18:42:50.732933410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732986 env[1214]: time="2025-03-17T18:42:50.732944972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:42:50.732986 env[1214]: time="2025-03-17T18:42:50.732956964Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:42:50.732986 env[1214]: time="2025-03-17T18:42:50.732971221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:42:50.732986 env[1214]: time="2025-03-17T18:42:50.732981440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:42:50.733065 env[1214]: time="2025-03-17T18:42:50.733012298Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:42:50.733065 env[1214]: time="2025-03-17T18:42:50.733050249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:42:50.733349 env[1214]: time="2025-03-17T18:42:50.733295208Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:42:50.733349 env[1214]: time="2025-03-17T18:42:50.733357475Z" level=info msg="Connect containerd service" Mar 17 18:42:50.735939 env[1214]: time="2025-03-17T18:42:50.733400326Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:42:50.735939 env[1214]: time="2025-03-17T18:42:50.733980944Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:42:50.735939 env[1214]: time="2025-03-17T18:42:50.734337724Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:42:50.735939 env[1214]: time="2025-03-17T18:42:50.734405210Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:42:50.735939 env[1214]: time="2025-03-17T18:42:50.734821270Z" level=info msg="containerd successfully booted in 0.049481s" Mar 17 18:42:50.735939 env[1214]: time="2025-03-17T18:42:50.735919219Z" level=info msg="Start subscribing containerd event" Mar 17 18:42:50.734529 systemd[1]: Started containerd.service. 
Mar 17 18:42:50.736120 env[1214]: time="2025-03-17T18:42:50.735963663Z" level=info msg="Start recovering state" Mar 17 18:42:50.736120 env[1214]: time="2025-03-17T18:42:50.736022022Z" level=info msg="Start event monitor" Mar 17 18:42:50.736120 env[1214]: time="2025-03-17T18:42:50.736047260Z" level=info msg="Start snapshots syncer" Mar 17 18:42:50.736120 env[1214]: time="2025-03-17T18:42:50.736058270Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:42:50.736120 env[1214]: time="2025-03-17T18:42:50.736064963Z" level=info msg="Start streaming server" Mar 17 18:42:50.783062 sshd_keygen[1203]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:42:50.791509 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:42:50.809285 systemd[1]: Finished sshd-keygen.service. Mar 17 18:42:50.811726 systemd[1]: Starting issuegen.service... Mar 17 18:42:50.816402 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:42:50.816539 systemd[1]: Finished issuegen.service. Mar 17 18:42:50.818547 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:42:50.828815 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:42:50.831727 systemd[1]: Started getty@tty1.service. Mar 17 18:42:50.835729 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:42:50.837215 systemd[1]: Reached target getty.target. Mar 17 18:42:51.010902 systemd-networkd[1029]: eth0: Gained IPv6LL Mar 17 18:42:51.015516 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:42:51.017269 systemd[1]: Reached target network-online.target. Mar 17 18:42:51.021152 systemd[1]: Starting kubelet.service... Mar 17 18:42:51.086903 tar[1209]: linux-amd64/LICENSE Mar 17 18:42:51.086903 tar[1209]: linux-amd64/README.md Mar 17 18:42:51.092139 systemd[1]: Finished prepare-helm.service. Mar 17 18:42:51.913042 systemd[1]: Started kubelet.service. 
Mar 17 18:42:51.914427 systemd[1]: Reached target multi-user.target. Mar 17 18:42:51.916600 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:42:51.925478 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:42:51.925669 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:42:51.927086 systemd[1]: Startup finished in 977ms (kernel) + 5.304s (initrd) + 6.691s (userspace) = 12.973s. Mar 17 18:42:52.510555 kubelet[1266]: E0317 18:42:52.510478 1266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:42:52.512486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:42:52.512621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:42:52.512867 systemd[1]: kubelet.service: Consumed 1.322s CPU time. Mar 17 18:42:53.860870 systemd[1]: Created slice system-sshd.slice. Mar 17 18:42:53.862194 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:36628.service. Mar 17 18:42:53.905975 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 36628 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:53.907434 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:53.916608 systemd-logind[1205]: New session 1 of user core. Mar 17 18:42:53.917560 systemd[1]: Created slice user-500.slice. Mar 17 18:42:53.918736 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:42:53.927722 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:42:53.929085 systemd[1]: Starting user@500.service... 
Mar 17 18:42:53.931637 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:54.002507 systemd[1278]: Queued start job for default target default.target. Mar 17 18:42:54.002948 systemd[1278]: Reached target paths.target. Mar 17 18:42:54.002964 systemd[1278]: Reached target sockets.target. Mar 17 18:42:54.002976 systemd[1278]: Reached target timers.target. Mar 17 18:42:54.002986 systemd[1278]: Reached target basic.target. Mar 17 18:42:54.003020 systemd[1278]: Reached target default.target. Mar 17 18:42:54.003042 systemd[1278]: Startup finished in 64ms. Mar 17 18:42:54.003169 systemd[1]: Started user@500.service. Mar 17 18:42:54.004253 systemd[1]: Started session-1.scope. Mar 17 18:42:54.057829 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:36644.service. Mar 17 18:42:54.101090 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 36644 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:54.102481 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:54.106682 systemd-logind[1205]: New session 2 of user core. Mar 17 18:42:54.107840 systemd[1]: Started session-2.scope. Mar 17 18:42:54.165278 sshd[1287]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:54.168119 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:36644.service: Deactivated successfully. Mar 17 18:42:54.168700 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:42:54.169234 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:42:54.170299 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:36660.service. Mar 17 18:42:54.170950 systemd-logind[1205]: Removed session 2. 
Mar 17 18:42:54.210388 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 36660 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:54.211957 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:54.215326 systemd-logind[1205]: New session 3 of user core. Mar 17 18:42:54.216180 systemd[1]: Started session-3.scope. Mar 17 18:42:54.267337 sshd[1293]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:54.270610 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:36660.service: Deactivated successfully. Mar 17 18:42:54.271141 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:42:54.271685 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:42:54.272850 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:36672.service. Mar 17 18:42:54.273687 systemd-logind[1205]: Removed session 3. Mar 17 18:42:54.311217 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 36672 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:54.312492 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:54.316025 systemd-logind[1205]: New session 4 of user core. Mar 17 18:42:54.316830 systemd[1]: Started session-4.scope. Mar 17 18:42:54.373660 sshd[1299]: pam_unix(sshd:session): session closed for user core Mar 17 18:42:54.376573 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:36672.service: Deactivated successfully. Mar 17 18:42:54.377111 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:42:54.377704 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:42:54.378885 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:36676.service. Mar 17 18:42:54.379641 systemd-logind[1205]: Removed session 4. 
Mar 17 18:42:54.416987 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 36676 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:42:54.418114 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:42:54.421414 systemd-logind[1205]: New session 5 of user core. Mar 17 18:42:54.422172 systemd[1]: Started session-5.scope. Mar 17 18:42:54.480075 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:42:54.480312 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:42:54.515888 systemd[1]: Starting docker.service... Mar 17 18:42:54.572861 env[1320]: time="2025-03-17T18:42:54.572792342Z" level=info msg="Starting up" Mar 17 18:42:54.574625 env[1320]: time="2025-03-17T18:42:54.574583797Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:42:54.574625 env[1320]: time="2025-03-17T18:42:54.574614633Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:42:54.574721 env[1320]: time="2025-03-17T18:42:54.574639538Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:42:54.574721 env[1320]: time="2025-03-17T18:42:54.574650470Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:42:54.577043 env[1320]: time="2025-03-17T18:42:54.576990562Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:42:54.577043 env[1320]: time="2025-03-17T18:42:54.577020294Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:42:54.577043 env[1320]: time="2025-03-17T18:42:54.577043096Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:42:54.577043 env[1320]: time="2025-03-17T18:42:54.577053327Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:42:55.466207 env[1320]: time="2025-03-17T18:42:55.466100968Z" level=info msg="Loading containers: start." Mar 17 18:42:55.594304 kernel: Initializing XFRM netlink socket Mar 17 18:42:55.625204 env[1320]: time="2025-03-17T18:42:55.625145626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:42:55.679479 systemd-networkd[1029]: docker0: Link UP Mar 17 18:42:55.760492 env[1320]: time="2025-03-17T18:42:55.760407616Z" level=info msg="Loading containers: done." Mar 17 18:42:55.769347 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck591617064-merged.mount: Deactivated successfully. Mar 17 18:42:55.771768 env[1320]: time="2025-03-17T18:42:55.771720840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:42:55.771926 env[1320]: time="2025-03-17T18:42:55.771900498Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:42:55.772016 env[1320]: time="2025-03-17T18:42:55.771994442Z" level=info msg="Daemon has completed initialization" Mar 17 18:42:55.791171 systemd[1]: Started docker.service. Mar 17 18:42:55.798611 env[1320]: time="2025-03-17T18:42:55.798538074Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:42:56.569230 env[1214]: time="2025-03-17T18:42:56.569170036Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 18:42:57.251508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603854039.mount: Deactivated successfully. 
Mar 17 18:42:58.825733 env[1214]: time="2025-03-17T18:42:58.825661723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:58.827483 env[1214]: time="2025-03-17T18:42:58.827453071Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:58.829202 env[1214]: time="2025-03-17T18:42:58.829154939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:58.830793 env[1214]: time="2025-03-17T18:42:58.830761124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:42:58.831498 env[1214]: time="2025-03-17T18:42:58.831460963Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 17 18:42:58.832796 env[1214]: time="2025-03-17T18:42:58.832773885Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 18:43:00.331051 env[1214]: time="2025-03-17T18:43:00.330982365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:00.332849 env[1214]: time="2025-03-17T18:43:00.332801386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:43:00.334511 env[1214]: time="2025-03-17T18:43:00.334487990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:00.336021 env[1214]: time="2025-03-17T18:43:00.335994103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:00.336882 env[1214]: time="2025-03-17T18:43:00.336845064Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 17 18:43:00.337466 env[1214]: time="2025-03-17T18:43:00.337446265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 18:43:02.124345 env[1214]: time="2025-03-17T18:43:02.124261732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:02.126134 env[1214]: time="2025-03-17T18:43:02.126086848Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:02.127822 env[1214]: time="2025-03-17T18:43:02.127794421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:02.129453 env[1214]: time="2025-03-17T18:43:02.129414984Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:02.130121 env[1214]: time="2025-03-17T18:43:02.130088925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 17 18:43:02.130655 env[1214]: time="2025-03-17T18:43:02.130613468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:43:02.763718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:43:02.763985 systemd[1]: Stopped kubelet.service. Mar 17 18:43:02.764037 systemd[1]: kubelet.service: Consumed 1.322s CPU time. Mar 17 18:43:02.765917 systemd[1]: Starting kubelet.service... Mar 17 18:43:02.875322 systemd[1]: Started kubelet.service. Mar 17 18:43:03.064693 kubelet[1455]: E0317 18:43:03.064565 1455 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:43:03.067726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:43:03.067862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:43:03.560596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52971874.mount: Deactivated successfully. 
Mar 17 18:43:04.628882 env[1214]: time="2025-03-17T18:43:04.628793606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:04.630983 env[1214]: time="2025-03-17T18:43:04.630897909Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:04.632529 env[1214]: time="2025-03-17T18:43:04.632468195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:04.633999 env[1214]: time="2025-03-17T18:43:04.633957572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:04.634433 env[1214]: time="2025-03-17T18:43:04.634377653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 18:43:04.634930 env[1214]: time="2025-03-17T18:43:04.634890619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:43:05.146421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658564344.mount: Deactivated successfully. 
Mar 17 18:43:06.426565 env[1214]: time="2025-03-17T18:43:06.426477225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.428630 env[1214]: time="2025-03-17T18:43:06.428561100Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.430521 env[1214]: time="2025-03-17T18:43:06.430461706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.432506 env[1214]: time="2025-03-17T18:43:06.432472635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.433462 env[1214]: time="2025-03-17T18:43:06.433420494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 18:43:06.434063 env[1214]: time="2025-03-17T18:43:06.434030941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:43:06.890614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630760360.mount: Deactivated successfully. 
Mar 17 18:43:06.896375 env[1214]: time="2025-03-17T18:43:06.896328375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.898159 env[1214]: time="2025-03-17T18:43:06.898103499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.899544 env[1214]: time="2025-03-17T18:43:06.899515633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.900776 env[1214]: time="2025-03-17T18:43:06.900735235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:06.901194 env[1214]: time="2025-03-17T18:43:06.901143225Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 18:43:06.901751 env[1214]: time="2025-03-17T18:43:06.901722843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 18:43:07.430063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038950361.mount: Deactivated successfully. 
Mar 17 18:43:11.379275 env[1214]: time="2025-03-17T18:43:11.379215325Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:11.381112 env[1214]: time="2025-03-17T18:43:11.381080944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:11.383267 env[1214]: time="2025-03-17T18:43:11.383225086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:11.384999 env[1214]: time="2025-03-17T18:43:11.384933968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:11.385875 env[1214]: time="2025-03-17T18:43:11.385834365Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 17 18:43:13.318815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:43:13.319007 systemd[1]: Stopped kubelet.service. Mar 17 18:43:13.320458 systemd[1]: Starting kubelet.service... Mar 17 18:43:13.433503 systemd[1]: Started kubelet.service. 
Mar 17 18:43:13.577012 kubelet[1485]: E0317 18:43:13.576843 1485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:43:13.579099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:43:13.579291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:43:14.566330 systemd[1]: Stopped kubelet.service. Mar 17 18:43:14.568606 systemd[1]: Starting kubelet.service... Mar 17 18:43:14.587782 systemd[1]: Reloading. Mar 17 18:43:14.649633 /usr/lib/systemd/system-generators/torcx-generator[1519]: time="2025-03-17T18:43:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:43:14.649666 /usr/lib/systemd/system-generators/torcx-generator[1519]: time="2025-03-17T18:43:14Z" level=info msg="torcx already run" Mar 17 18:43:15.326350 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:43:15.326367 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:43:15.343506 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:43:15.420517 systemd[1]: Started kubelet.service. Mar 17 18:43:15.421914 systemd[1]: Stopping kubelet.service... 
Mar 17 18:43:15.422256 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:43:15.422482 systemd[1]: Stopped kubelet.service. Mar 17 18:43:15.424124 systemd[1]: Starting kubelet.service... Mar 17 18:43:15.503745 systemd[1]: Started kubelet.service. Mar 17 18:43:15.549024 kubelet[1567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:43:15.549024 kubelet[1567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:43:15.549024 kubelet[1567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:43:15.549593 kubelet[1567]: I0317 18:43:15.549048 1567 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:43:15.712389 kubelet[1567]: I0317 18:43:15.712224 1567 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:43:15.712389 kubelet[1567]: I0317 18:43:15.712280 1567 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:43:15.712685 kubelet[1567]: I0317 18:43:15.712651 1567 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:43:15.744411 kubelet[1567]: E0317 18:43:15.744312 1567 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:15.746689 kubelet[1567]: I0317 18:43:15.746625 1567 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:43:15.753928 kubelet[1567]: E0317 18:43:15.753893 1567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:43:15.753928 kubelet[1567]: I0317 18:43:15.753929 1567 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:43:15.759706 kubelet[1567]: I0317 18:43:15.759673 1567 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:43:15.760717 kubelet[1567]: I0317 18:43:15.760692 1567 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:43:15.760857 kubelet[1567]: I0317 18:43:15.760811 1567 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:43:15.761059 kubelet[1567]: I0317 18:43:15.760852 1567 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Mar 17 18:43:15.761156 kubelet[1567]: I0317 18:43:15.761061 1567 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:43:15.761156 kubelet[1567]: I0317 18:43:15.761071 1567 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:43:15.761222 kubelet[1567]: I0317 18:43:15.761199 1567 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:43:15.768058 kubelet[1567]: I0317 18:43:15.768023 1567 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:43:15.768058 kubelet[1567]: I0317 18:43:15.768049 1567 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:43:15.768171 kubelet[1567]: I0317 18:43:15.768088 1567 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:43:15.768171 kubelet[1567]: I0317 18:43:15.768104 1567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:43:15.768684 kubelet[1567]: W0317 18:43:15.768617 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:15.768729 kubelet[1567]: E0317 18:43:15.768691 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:15.772884 kubelet[1567]: W0317 18:43:15.772835 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:15.772934 
kubelet[1567]: E0317 18:43:15.772895 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:15.780109 kubelet[1567]: I0317 18:43:15.780083 1567 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:43:15.782461 kubelet[1567]: I0317 18:43:15.782436 1567 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:43:15.783061 kubelet[1567]: W0317 18:43:15.783032 1567 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:43:15.784072 kubelet[1567]: I0317 18:43:15.784042 1567 server.go:1269] "Started kubelet" Mar 17 18:43:15.784228 kubelet[1567]: I0317 18:43:15.784196 1567 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:43:15.784578 kubelet[1567]: I0317 18:43:15.784535 1567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:43:15.784835 kubelet[1567]: I0317 18:43:15.784815 1567 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:43:15.790386 kubelet[1567]: I0317 18:43:15.790358 1567 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:43:15.806409 kubelet[1567]: E0317 18:43:15.806387 1567 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:43:15.822059 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:43:15.822442 kubelet[1567]: I0317 18:43:15.822400 1567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:43:15.824145 kubelet[1567]: E0317 18:43:15.822741 1567 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dab51203ff305 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:43:15.784012549 +0000 UTC m=+0.275943754,LastTimestamp:2025-03-17 18:43:15.784012549 +0000 UTC m=+0.275943754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:43:15.824376 kubelet[1567]: I0317 18:43:15.824284 1567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:43:15.825649 kubelet[1567]: E0317 18:43:15.825602 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:15.825764 kubelet[1567]: I0317 18:43:15.825743 1567 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:43:15.825884 kubelet[1567]: I0317 18:43:15.825862 1567 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:43:15.826004 kubelet[1567]: I0317 18:43:15.825985 1567 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:43:15.826544 kubelet[1567]: I0317 18:43:15.826508 1567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Mar 17 18:43:15.826759 kubelet[1567]: W0317 18:43:15.826502 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:15.826817 kubelet[1567]: E0317 18:43:15.826769 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:15.826817 kubelet[1567]: E0317 18:43:15.825984 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Mar 17 18:43:15.827961 kubelet[1567]: I0317 18:43:15.827914 1567 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:43:15.827961 kubelet[1567]: I0317 18:43:15.827941 1567 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:43:15.837347 kubelet[1567]: I0317 18:43:15.837314 1567 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:43:15.837347 kubelet[1567]: I0317 18:43:15.837330 1567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:43:15.837347 kubelet[1567]: I0317 18:43:15.837348 1567 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:43:15.841992 kubelet[1567]: I0317 18:43:15.841941 1567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:43:15.843035 kubelet[1567]: I0317 18:43:15.843002 1567 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:43:15.843085 kubelet[1567]: I0317 18:43:15.843050 1567 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:43:15.843085 kubelet[1567]: I0317 18:43:15.843071 1567 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:43:15.843152 kubelet[1567]: E0317 18:43:15.843115 1567 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:43:15.844132 kubelet[1567]: W0317 18:43:15.844091 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:15.844207 kubelet[1567]: E0317 18:43:15.844147 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:15.925817 kubelet[1567]: E0317 18:43:15.925748 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:15.944221 kubelet[1567]: E0317 18:43:15.944157 1567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:43:16.027007 kubelet[1567]: E0317 18:43:16.026857 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:16.027196 kubelet[1567]: E0317 18:43:16.027158 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: 
connect: connection refused" interval="400ms" Mar 17 18:43:16.127867 kubelet[1567]: E0317 18:43:16.127802 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:16.145076 kubelet[1567]: E0317 18:43:16.145015 1567 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:43:16.228665 kubelet[1567]: E0317 18:43:16.228586 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:16.329024 kubelet[1567]: E0317 18:43:16.328824 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:16.428897 kubelet[1567]: E0317 18:43:16.428817 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Mar 17 18:43:16.428897 kubelet[1567]: E0317 18:43:16.428904 1567 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:43:16.484984 kubelet[1567]: I0317 18:43:16.484901 1567 policy_none.go:49] "None policy: Start" Mar 17 18:43:16.485985 kubelet[1567]: I0317 18:43:16.485960 1567 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:43:16.486037 kubelet[1567]: I0317 18:43:16.486015 1567 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:43:16.494339 systemd[1]: Created slice kubepods.slice. Mar 17 18:43:16.498411 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:43:16.500462 systemd[1]: Created slice kubepods-besteffort.slice. 
Mar 17 18:43:16.512766 kubelet[1567]: I0317 18:43:16.512737 1567 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:43:16.512974 kubelet[1567]: I0317 18:43:16.512952 1567 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:43:16.513021 kubelet[1567]: I0317 18:43:16.512977 1567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:43:16.513275 kubelet[1567]: I0317 18:43:16.513258 1567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:43:16.515259 kubelet[1567]: E0317 18:43:16.515238 1567 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:43:16.552293 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 17 18:43:16.561212 systemd[1]: Created slice kubepods-burstable-pod6bfa2120b4384a3c4fe0aa40a0a6c276.slice. Mar 17 18:43:16.568612 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. 
Mar 17 18:43:16.615516 kubelet[1567]: I0317 18:43:16.615378 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:43:16.615876 kubelet[1567]: E0317 18:43:16.615788 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 17 18:43:16.630204 kubelet[1567]: I0317 18:43:16.630156 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bfa2120b4384a3c4fe0aa40a0a6c276-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfa2120b4384a3c4fe0aa40a0a6c276\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:43:16.630274 kubelet[1567]: I0317 18:43:16.630205 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bfa2120b4384a3c4fe0aa40a0a6c276-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfa2120b4384a3c4fe0aa40a0a6c276\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:43:16.630274 kubelet[1567]: I0317 18:43:16.630233 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bfa2120b4384a3c4fe0aa40a0a6c276-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bfa2120b4384a3c4fe0aa40a0a6c276\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:43:16.630336 kubelet[1567]: I0317 18:43:16.630283 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:43:16.630336 
kubelet[1567]: I0317 18:43:16.630307 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:43:16.630408 kubelet[1567]: I0317 18:43:16.630336 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:43:16.630408 kubelet[1567]: I0317 18:43:16.630356 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:43:16.630408 kubelet[1567]: I0317 18:43:16.630373 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:43:16.630408 kubelet[1567]: I0317 18:43:16.630391 1567 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" 
Mar 17 18:43:16.817632 kubelet[1567]: I0317 18:43:16.817612 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:43:16.817850 kubelet[1567]: E0317 18:43:16.817819 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 17 18:43:16.859375 kubelet[1567]: E0317 18:43:16.859317 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:16.860212 kubelet[1567]: W0317 18:43:16.859805 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:16.860212 kubelet[1567]: E0317 18:43:16.859850 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:16.860298 env[1214]: time="2025-03-17T18:43:16.859946831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:16.868301 kubelet[1567]: E0317 18:43:16.868229 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:16.871296 kubelet[1567]: E0317 18:43:16.871272 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:16.871469 env[1214]: time="2025-03-17T18:43:16.871432276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bfa2120b4384a3c4fe0aa40a0a6c276,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:16.871588 env[1214]: time="2025-03-17T18:43:16.871556762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 18:43:17.029915 kubelet[1567]: W0317 18:43:17.029809 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:17.030250 kubelet[1567]: E0317 18:43:17.029930 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:17.132014 kubelet[1567]: W0317 18:43:17.131890 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:17.132014 kubelet[1567]: E0317 18:43:17.131955 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:17.219110 kubelet[1567]: I0317 18:43:17.219077 1567 kubelet_node_status.go:72] 
"Attempting to register node" node="localhost" Mar 17 18:43:17.219405 kubelet[1567]: E0317 18:43:17.219369 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Mar 17 18:43:17.229849 kubelet[1567]: E0317 18:43:17.229815 1567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Mar 17 18:43:17.240388 kubelet[1567]: W0317 18:43:17.240333 1567 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Mar 17 18:43:17.240447 kubelet[1567]: E0317 18:43:17.240395 1567 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:17.458993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2789259669.mount: Deactivated successfully. 
Mar 17 18:43:17.464927 env[1214]: time="2025-03-17T18:43:17.464879437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.465836 env[1214]: time="2025-03-17T18:43:17.465788546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.468637 env[1214]: time="2025-03-17T18:43:17.468611038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.470550 env[1214]: time="2025-03-17T18:43:17.470515624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.471611 env[1214]: time="2025-03-17T18:43:17.471585491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.472769 env[1214]: time="2025-03-17T18:43:17.472738799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.473897 env[1214]: time="2025-03-17T18:43:17.473872102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.474985 env[1214]: time="2025-03-17T18:43:17.474955393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.476241 env[1214]: time="2025-03-17T18:43:17.476156144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.478039 env[1214]: time="2025-03-17T18:43:17.478005622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.479799 env[1214]: time="2025-03-17T18:43:17.479771299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.480434 env[1214]: time="2025-03-17T18:43:17.480401064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:43:17.534834 env[1214]: time="2025-03-17T18:43:17.534745318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:17.534834 env[1214]: time="2025-03-17T18:43:17.534795822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:17.534834 env[1214]: time="2025-03-17T18:43:17.534805805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:17.535062 env[1214]: time="2025-03-17T18:43:17.534996571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b79f04e645d0de81f7301ac3fee8f50ac83c3928348aa260d2088b28c4e3750 pid=1613 runtime=io.containerd.runc.v2 Mar 17 18:43:17.536403 env[1214]: time="2025-03-17T18:43:17.536342572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:17.536572 env[1214]: time="2025-03-17T18:43:17.536502026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:17.536572 env[1214]: time="2025-03-17T18:43:17.536520717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:17.536828 env[1214]: time="2025-03-17T18:43:17.536736254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b0c144d5e2c51a41329b45ae53e531996dd6e17f5c16444af4a5c02300d8342 pid=1631 runtime=io.containerd.runc.v2 Mar 17 18:43:17.537671 env[1214]: time="2025-03-17T18:43:17.537602985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:43:17.537671 env[1214]: time="2025-03-17T18:43:17.537637447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:43:17.537821 env[1214]: time="2025-03-17T18:43:17.537650620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:43:17.537939 env[1214]: time="2025-03-17T18:43:17.537810084Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1978f8fde91e072fb5541c2d80030b404dfd6cb0c347ca24b0b2a7bdb8ceac76 pid=1617 runtime=io.containerd.runc.v2 Mar 17 18:43:17.556070 systemd[1]: Started cri-containerd-1978f8fde91e072fb5541c2d80030b404dfd6cb0c347ca24b0b2a7bdb8ceac76.scope. Mar 17 18:43:17.561063 systemd[1]: Started cri-containerd-2b79f04e645d0de81f7301ac3fee8f50ac83c3928348aa260d2088b28c4e3750.scope. Mar 17 18:43:17.606546 systemd[1]: Started cri-containerd-3b0c144d5e2c51a41329b45ae53e531996dd6e17f5c16444af4a5c02300d8342.scope. Mar 17 18:43:17.738426 env[1214]: time="2025-03-17T18:43:17.738308852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1978f8fde91e072fb5541c2d80030b404dfd6cb0c347ca24b0b2a7bdb8ceac76\"" Mar 17 18:43:17.739913 kubelet[1567]: E0317 18:43:17.739723 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:17.741577 env[1214]: time="2025-03-17T18:43:17.741547038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6bfa2120b4384a3c4fe0aa40a0a6c276,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b79f04e645d0de81f7301ac3fee8f50ac83c3928348aa260d2088b28c4e3750\"" Mar 17 18:43:17.741974 env[1214]: time="2025-03-17T18:43:17.741952770Z" level=info msg="CreateContainer within sandbox \"1978f8fde91e072fb5541c2d80030b404dfd6cb0c347ca24b0b2a7bdb8ceac76\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:43:17.743938 kubelet[1567]: E0317 18:43:17.743781 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:17.745046 env[1214]: time="2025-03-17T18:43:17.745022121Z" level=info msg="CreateContainer within sandbox \"2b79f04e645d0de81f7301ac3fee8f50ac83c3928348aa260d2088b28c4e3750\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:43:17.745427 env[1214]: time="2025-03-17T18:43:17.745378772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b0c144d5e2c51a41329b45ae53e531996dd6e17f5c16444af4a5c02300d8342\"" Mar 17 18:43:17.746674 kubelet[1567]: E0317 18:43:17.746578 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:17.747802 env[1214]: time="2025-03-17T18:43:17.747780215Z" level=info msg="CreateContainer within sandbox \"3b0c144d5e2c51a41329b45ae53e531996dd6e17f5c16444af4a5c02300d8342\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:43:17.846798 kubelet[1567]: E0317 18:43:17.846751 1567 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:43:18.021755 kubelet[1567]: I0317 18:43:18.021612 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:43:18.022128 kubelet[1567]: E0317 18:43:18.022086 1567 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection 
refused" node="localhost" Mar 17 18:43:18.174643 env[1214]: time="2025-03-17T18:43:18.174551644Z" level=info msg="CreateContainer within sandbox \"1978f8fde91e072fb5541c2d80030b404dfd6cb0c347ca24b0b2a7bdb8ceac76\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e07caed564ec89a866eeac3f5e06c8092c207268d75f842b4584df073c3f392e\"" Mar 17 18:43:18.175418 env[1214]: time="2025-03-17T18:43:18.175376862Z" level=info msg="StartContainer for \"e07caed564ec89a866eeac3f5e06c8092c207268d75f842b4584df073c3f392e\"" Mar 17 18:43:18.186078 env[1214]: time="2025-03-17T18:43:18.186021484Z" level=info msg="CreateContainer within sandbox \"2b79f04e645d0de81f7301ac3fee8f50ac83c3928348aa260d2088b28c4e3750\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"61e2824a0f48d770aaba163f1854682466ac33df648ff4d022c733541a47a41b\"" Mar 17 18:43:18.186708 env[1214]: time="2025-03-17T18:43:18.186676018Z" level=info msg="StartContainer for \"61e2824a0f48d770aaba163f1854682466ac33df648ff4d022c733541a47a41b\"" Mar 17 18:43:18.190804 systemd[1]: Started cri-containerd-e07caed564ec89a866eeac3f5e06c8092c207268d75f842b4584df073c3f392e.scope. Mar 17 18:43:18.191918 env[1214]: time="2025-03-17T18:43:18.191646301Z" level=info msg="CreateContainer within sandbox \"3b0c144d5e2c51a41329b45ae53e531996dd6e17f5c16444af4a5c02300d8342\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c63131bf47dddd652d85fcc6b89cae43c7fa63b4d406add33931bba24c89903\"" Mar 17 18:43:18.192419 env[1214]: time="2025-03-17T18:43:18.192392475Z" level=info msg="StartContainer for \"2c63131bf47dddd652d85fcc6b89cae43c7fa63b4d406add33931bba24c89903\"" Mar 17 18:43:18.208918 systemd[1]: Started cri-containerd-2c63131bf47dddd652d85fcc6b89cae43c7fa63b4d406add33931bba24c89903.scope. Mar 17 18:43:18.223675 systemd[1]: Started cri-containerd-61e2824a0f48d770aaba163f1854682466ac33df648ff4d022c733541a47a41b.scope. 
Mar 17 18:43:18.263254 env[1214]: time="2025-03-17T18:43:18.263171811Z" level=info msg="StartContainer for \"e07caed564ec89a866eeac3f5e06c8092c207268d75f842b4584df073c3f392e\" returns successfully"
Mar 17 18:43:18.280960 env[1214]: time="2025-03-17T18:43:18.280846904Z" level=info msg="StartContainer for \"61e2824a0f48d770aaba163f1854682466ac33df648ff4d022c733541a47a41b\" returns successfully"
Mar 17 18:43:18.281201 env[1214]: time="2025-03-17T18:43:18.280935786Z" level=info msg="StartContainer for \"2c63131bf47dddd652d85fcc6b89cae43c7fa63b4d406add33931bba24c89903\" returns successfully"
Mar 17 18:43:18.854033 kubelet[1567]: E0317 18:43:18.853924 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:18.855840 kubelet[1567]: E0317 18:43:18.855807 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:18.858088 kubelet[1567]: E0317 18:43:18.858030 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:19.624237 kubelet[1567]: I0317 18:43:19.624195 1567 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:43:19.860514 kubelet[1567]: E0317 18:43:19.860468 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:19.861246 kubelet[1567]: E0317 18:43:19.861226 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:19.861513 kubelet[1567]: E0317 18:43:19.861478 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:20.124014 kubelet[1567]: E0317 18:43:20.123971 1567 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 17 18:43:20.222853 kubelet[1567]: I0317 18:43:20.222806 1567 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 17 18:43:20.222853 kubelet[1567]: E0317 18:43:20.222847 1567 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 17 18:43:20.770434 kubelet[1567]: I0317 18:43:20.770349 1567 apiserver.go:52] "Watching apiserver"
Mar 17 18:43:20.826497 kubelet[1567]: I0317 18:43:20.826444 1567 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:43:20.866423 kubelet[1567]: E0317 18:43:20.866376 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:43:20.866423 kubelet[1567]: E0317 18:43:20.866427 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 17 18:43:20.866868 kubelet[1567]: E0317 18:43:20.866531 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:20.866868 kubelet[1567]: E0317 18:43:20.866600 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:20.866868 kubelet[1567]: E0317 18:43:20.866614 1567 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 17 18:43:20.866868 kubelet[1567]: E0317 18:43:20.866719 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:21.869539 kubelet[1567]: E0317 18:43:21.869503 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:22.449851 systemd[1]: Reloading.
Mar 17 18:43:22.514153 /usr/lib/systemd/system-generators/torcx-generator[1857]: time="2025-03-17T18:43:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:43:22.514222 /usr/lib/systemd/system-generators/torcx-generator[1857]: time="2025-03-17T18:43:22Z" level=info msg="torcx already run"
Mar 17 18:43:22.581322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:43:22.581338 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:43:22.598534 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:43:22.686526 systemd[1]: Stopping kubelet.service...
Mar 17 18:43:22.708701 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:43:22.708974 systemd[1]: Stopped kubelet.service.
Mar 17 18:43:22.710534 systemd[1]: Starting kubelet.service...
Mar 17 18:43:22.793298 systemd[1]: Started kubelet.service.
Mar 17 18:43:22.825986 kubelet[1902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:43:22.825986 kubelet[1902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:43:22.825986 kubelet[1902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:43:22.826523 kubelet[1902]: I0317 18:43:22.826014 1902 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:43:22.832357 kubelet[1902]: I0317 18:43:22.832315 1902 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 18:43:22.832357 kubelet[1902]: I0317 18:43:22.832341 1902 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:43:22.834934 kubelet[1902]: I0317 18:43:22.834906 1902 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 18:43:22.837253 kubelet[1902]: I0317 18:43:22.837225 1902 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:43:22.839130 kubelet[1902]: I0317 18:43:22.839101 1902 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:43:22.843099 kubelet[1902]: E0317 18:43:22.843048 1902 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:43:22.843099 kubelet[1902]: I0317 18:43:22.843077 1902 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:43:22.847064 kubelet[1902]: I0317 18:43:22.846636 1902 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:43:22.847064 kubelet[1902]: I0317 18:43:22.846725 1902 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 18:43:22.847064 kubelet[1902]: I0317 18:43:22.846804 1902 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:43:22.847064 kubelet[1902]: I0317 18:43:22.846829 1902 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.846995 1902 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.847003 1902 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.847036 1902 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.847128 1902 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.847139 1902 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.847164 1902 kubelet.go:314] "Adding apiserver pod source"
Mar 17 18:43:22.847844 kubelet[1902]: I0317 18:43:22.847189 1902 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:43:22.848375 kubelet[1902]: I0317 18:43:22.848335 1902 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:43:22.848835 kubelet[1902]: I0317 18:43:22.848815 1902 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:43:22.852840 kubelet[1902]: I0317 18:43:22.852803 1902 server.go:1269] "Started kubelet"
Mar 17 18:43:22.852949 kubelet[1902]: I0317 18:43:22.852901 1902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:43:22.853235 kubelet[1902]: I0317 18:43:22.853209 1902 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:43:22.856364 kubelet[1902]: I0317 18:43:22.855873 1902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:43:22.862711 kubelet[1902]: I0317 18:43:22.862664 1902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:43:22.865302 kubelet[1902]: I0317 18:43:22.863397 1902 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:43:22.865302 kubelet[1902]: I0317 18:43:22.864438 1902 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 18:43:22.865302 kubelet[1902]: I0317 18:43:22.865099 1902 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 18:43:22.866927 kubelet[1902]: E0317 18:43:22.865688 1902 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:43:22.866927 kubelet[1902]: I0317 18:43:22.866519 1902 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 18:43:22.866927 kubelet[1902]: I0317 18:43:22.866699 1902 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:43:22.868995 kubelet[1902]: I0317 18:43:22.868971 1902 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:43:22.871161 kubelet[1902]: E0317 18:43:22.870725 1902 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:43:22.873785 kubelet[1902]: I0317 18:43:22.873672 1902 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:43:22.873785 kubelet[1902]: I0317 18:43:22.873692 1902 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:43:22.883080 kubelet[1902]: I0317 18:43:22.883031 1902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:43:22.886025 kubelet[1902]: I0317 18:43:22.886006 1902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:43:22.886087 kubelet[1902]: I0317 18:43:22.886037 1902 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:43:22.886087 kubelet[1902]: I0317 18:43:22.886057 1902 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 18:43:22.886223 kubelet[1902]: E0317 18:43:22.886200 1902 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:43:22.903498 kubelet[1902]: I0317 18:43:22.903469 1902 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:43:22.903498 kubelet[1902]: I0317 18:43:22.903486 1902 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:43:22.903498 kubelet[1902]: I0317 18:43:22.903505 1902 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:43:22.903727 kubelet[1902]: I0317 18:43:22.903638 1902 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:43:22.903727 kubelet[1902]: I0317 18:43:22.903648 1902 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:43:22.903727 kubelet[1902]: I0317 18:43:22.903666 1902 policy_none.go:49] "None policy: Start"
Mar 17 18:43:22.904212 kubelet[1902]: I0317 18:43:22.904194 1902 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:43:22.904212 kubelet[1902]: I0317 18:43:22.904215 1902 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:43:22.904329 kubelet[1902]: I0317 18:43:22.904316 1902 state_mem.go:75] "Updated machine memory state"
Mar 17 18:43:22.908935 kubelet[1902]: I0317 18:43:22.908906 1902 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:43:22.909090 kubelet[1902]: I0317 18:43:22.909068 1902 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:43:22.909143 kubelet[1902]: I0317 18:43:22.909086 1902 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:43:22.909413 kubelet[1902]: I0317 18:43:22.909387 1902 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:43:22.994021 kubelet[1902]: E0317 18:43:22.992498 1902 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 17 18:43:23.014193 kubelet[1902]: I0317 18:43:23.014147 1902 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 18:43:23.021239 kubelet[1902]: I0317 18:43:23.021212 1902 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Mar 17 18:43:23.021331 kubelet[1902]: I0317 18:43:23.021306 1902 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 17 18:43:23.167640 kubelet[1902]: I0317 18:43:23.167565 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:43:23.167640 kubelet[1902]: I0317 18:43:23.167612 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:43:23.167640 kubelet[1902]: I0317 18:43:23.167632 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 18:43:23.167640 kubelet[1902]: I0317 18:43:23.167648 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bfa2120b4384a3c4fe0aa40a0a6c276-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfa2120b4384a3c4fe0aa40a0a6c276\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:43:23.167640 kubelet[1902]: I0317 18:43:23.167661 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:43:23.167993 kubelet[1902]: I0317 18:43:23.167675 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:43:23.167993 kubelet[1902]: I0317 18:43:23.167688 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:43:23.167993 kubelet[1902]: I0317 18:43:23.167703 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bfa2120b4384a3c4fe0aa40a0a6c276-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6bfa2120b4384a3c4fe0aa40a0a6c276\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:43:23.167993 kubelet[1902]: I0317 18:43:23.167719 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bfa2120b4384a3c4fe0aa40a0a6c276-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6bfa2120b4384a3c4fe0aa40a0a6c276\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:43:23.293049 kubelet[1902]: E0317 18:43:23.292925 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:23.293218 kubelet[1902]: E0317 18:43:23.293078 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:23.293218 kubelet[1902]: E0317 18:43:23.293121 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:23.575688 sudo[1935]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:43:23.575965 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:43:23.848850 kubelet[1902]: I0317 18:43:23.848705 1902 apiserver.go:52] "Watching apiserver"
Mar 17 18:43:23.866994 kubelet[1902]: I0317 18:43:23.866938 1902 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 18:43:23.894879 kubelet[1902]: E0317 18:43:23.894842 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:23.895099 kubelet[1902]: E0317 18:43:23.895069 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:23.895316 kubelet[1902]: E0317 18:43:23.895293 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:23.913151 kubelet[1902]: I0317 18:43:23.913028 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.912999839 podStartE2EDuration="1.912999839s" podCreationTimestamp="2025-03-17 18:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:23.912639001 +0000 UTC m=+1.115839158" watchObservedRunningTime="2025-03-17 18:43:23.912999839 +0000 UTC m=+1.116199996"
Mar 17 18:43:23.919389 kubelet[1902]: I0317 18:43:23.919338 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.919313242 podStartE2EDuration="2.919313242s" podCreationTimestamp="2025-03-17 18:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:23.918777526 +0000 UTC m=+1.121977683" watchObservedRunningTime="2025-03-17 18:43:23.919313242 +0000 UTC m=+1.122513389"
Mar 17 18:43:23.925024 kubelet[1902]: I0317 18:43:23.924972 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9249646089999999 podStartE2EDuration="1.924964609s" podCreationTimestamp="2025-03-17 18:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:23.924766454 +0000 UTC m=+1.127966612" watchObservedRunningTime="2025-03-17 18:43:23.924964609 +0000 UTC m=+1.128164766"
Mar 17 18:43:24.037944 sudo[1935]: pam_unix(sudo:session): session closed for user root
Mar 17 18:43:24.896150 kubelet[1902]: E0317 18:43:24.896104 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:25.295650 sudo[1308]: pam_unix(sudo:session): session closed for user root
Mar 17 18:43:25.297081 sshd[1305]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:25.299933 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:36676.service: Deactivated successfully.
Mar 17 18:43:25.300681 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:43:25.300819 systemd[1]: session-5.scope: Consumed 4.499s CPU time.
Mar 17 18:43:25.301274 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:43:25.302089 systemd-logind[1205]: Removed session 5.
Mar 17 18:43:26.079874 kubelet[1902]: E0317 18:43:26.079826 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:27.307017 kubelet[1902]: E0317 18:43:27.306958 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:27.728829 kubelet[1902]: I0317 18:43:27.728705 1902 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:43:27.729144 env[1214]: time="2025-03-17T18:43:27.729098909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:43:27.729502 kubelet[1902]: I0317 18:43:27.729363 1902 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:43:28.791328 systemd[1]: Created slice kubepods-besteffort-pod3988015f_576b_464d_96fc_8868242523de.slice.
Mar 17 18:43:28.801572 kubelet[1902]: I0317 18:43:28.801518 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnt55\" (UniqueName: \"kubernetes.io/projected/3988015f-576b-464d-96fc-8868242523de-kube-api-access-vnt55\") pod \"kube-proxy-v6sjl\" (UID: \"3988015f-576b-464d-96fc-8868242523de\") " pod="kube-system/kube-proxy-v6sjl"
Mar 17 18:43:28.801572 kubelet[1902]: I0317 18:43:28.801569 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-hostproc\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.801944 kubelet[1902]: I0317 18:43:28.801615 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-net\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.801944 kubelet[1902]: I0317 18:43:28.801638 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3988015f-576b-464d-96fc-8868242523de-lib-modules\") pod \"kube-proxy-v6sjl\" (UID: \"3988015f-576b-464d-96fc-8868242523de\") " pod="kube-system/kube-proxy-v6sjl"
Mar 17 18:43:28.801944 kubelet[1902]: I0317 18:43:28.801662 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-lib-modules\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.801944 kubelet[1902]: I0317 18:43:28.801683 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/086f8ba3-f9e8-414e-8332-8e34fe73720f-clustermesh-secrets\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.801944 kubelet[1902]: I0317 18:43:28.801719 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-run\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.801944 kubelet[1902]: I0317 18:43:28.801821 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3988015f-576b-464d-96fc-8868242523de-xtables-lock\") pod \"kube-proxy-v6sjl\" (UID: \"3988015f-576b-464d-96fc-8868242523de\") " pod="kube-system/kube-proxy-v6sjl"
Mar 17 18:43:28.802094 kubelet[1902]: I0317 18:43:28.801850 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-config-path\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802094 kubelet[1902]: I0317 18:43:28.801917 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-hubble-tls\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802094 kubelet[1902]: I0317 18:43:28.801973 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-cgroup\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802094 kubelet[1902]: I0317 18:43:28.801990 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cni-path\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802094 kubelet[1902]: I0317 18:43:28.802004 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-etc-cni-netd\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802094 kubelet[1902]: I0317 18:43:28.802039 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3988015f-576b-464d-96fc-8868242523de-kube-proxy\") pod \"kube-proxy-v6sjl\" (UID: \"3988015f-576b-464d-96fc-8868242523de\") " pod="kube-system/kube-proxy-v6sjl"
Mar 17 18:43:28.802261 kubelet[1902]: I0317 18:43:28.802063 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-bpf-maps\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802261 kubelet[1902]: I0317 18:43:28.802082 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-xtables-lock\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802261 kubelet[1902]: I0317 18:43:28.802100 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-kernel\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.802261 kubelet[1902]: I0317 18:43:28.802122 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn774\" (UniqueName: \"kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-kube-api-access-sn774\") pod \"cilium-6l4j2\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " pod="kube-system/cilium-6l4j2"
Mar 17 18:43:28.804877 systemd[1]: Created slice kubepods-burstable-pod086f8ba3_f9e8_414e_8332_8e34fe73720f.slice.
Mar 17 18:43:28.843863 systemd[1]: Created slice kubepods-besteffort-pode00882d9_93a0_46b7_ad71_8237272a115f.slice.
Mar 17 18:43:28.903195 kubelet[1902]: I0317 18:43:28.903131 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24dx\" (UniqueName: \"kubernetes.io/projected/e00882d9-93a0-46b7-ad71-8237272a115f-kube-api-access-m24dx\") pod \"cilium-operator-5d85765b45-dlmfh\" (UID: \"e00882d9-93a0-46b7-ad71-8237272a115f\") " pod="kube-system/cilium-operator-5d85765b45-dlmfh"
Mar 17 18:43:28.903195 kubelet[1902]: I0317 18:43:28.903198 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e00882d9-93a0-46b7-ad71-8237272a115f-cilium-config-path\") pod \"cilium-operator-5d85765b45-dlmfh\" (UID: \"e00882d9-93a0-46b7-ad71-8237272a115f\") " pod="kube-system/cilium-operator-5d85765b45-dlmfh"
Mar 17 18:43:28.903663 kubelet[1902]: I0317 18:43:28.903622 1902 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 17 18:43:29.102010 kubelet[1902]: E0317 18:43:29.101949 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:29.102679 env[1214]: time="2025-03-17T18:43:29.102640894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6sjl,Uid:3988015f-576b-464d-96fc-8868242523de,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:29.109792 kubelet[1902]: E0317 18:43:29.109728 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:29.110364 env[1214]: time="2025-03-17T18:43:29.110318360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6l4j2,Uid:086f8ba3-f9e8-414e-8332-8e34fe73720f,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:29.122150 env[1214]: time="2025-03-17T18:43:29.122056487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:29.122150 env[1214]: time="2025-03-17T18:43:29.122094906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:29.122150 env[1214]: time="2025-03-17T18:43:29.122105343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:29.122399 env[1214]: time="2025-03-17T18:43:29.122269354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f7142a99df5c45cf99469d46facaf0a75da5b80065879a3fd91a7ef933e9b1b pid=1996 runtime=io.containerd.runc.v2
Mar 17 18:43:29.131694 env[1214]: time="2025-03-17T18:43:29.131615853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:29.131694 env[1214]: time="2025-03-17T18:43:29.131673882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:29.131825 env[1214]: time="2025-03-17T18:43:29.131688018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:29.133376 env[1214]: time="2025-03-17T18:43:29.132849103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c pid=2019 runtime=io.containerd.runc.v2
Mar 17 18:43:29.134922 systemd[1]: Started cri-containerd-8f7142a99df5c45cf99469d46facaf0a75da5b80065879a3fd91a7ef933e9b1b.scope.
Mar 17 18:43:29.143831 systemd[1]: Started cri-containerd-7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c.scope.
Mar 17 18:43:29.147222 kubelet[1902]: E0317 18:43:29.147139 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:29.149237 env[1214]: time="2025-03-17T18:43:29.147641151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dlmfh,Uid:e00882d9-93a0-46b7-ad71-8237272a115f,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:29.163546 env[1214]: time="2025-03-17T18:43:29.163499020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6sjl,Uid:3988015f-576b-464d-96fc-8868242523de,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f7142a99df5c45cf99469d46facaf0a75da5b80065879a3fd91a7ef933e9b1b\""
Mar 17 18:43:29.165071 kubelet[1902]: E0317 18:43:29.164340 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:29.167944 env[1214]: time="2025-03-17T18:43:29.167900224Z" level=info msg="CreateContainer within sandbox \"8f7142a99df5c45cf99469d46facaf0a75da5b80065879a3fd91a7ef933e9b1b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:43:29.168637 env[1214]: time="2025-03-17T18:43:29.168592574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6l4j2,Uid:086f8ba3-f9e8-414e-8332-8e34fe73720f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\""
Mar 17 18:43:29.169072 kubelet[1902]: E0317 18:43:29.169050 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:29.169994 env[1214]: time="2025-03-17T18:43:29.169964451Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:43:29.179121 env[1214]: time="2025-03-17T18:43:29.179068275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:29.179241 env[1214]: time="2025-03-17T18:43:29.179114845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:29.179241 env[1214]: time="2025-03-17T18:43:29.179144972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:29.179334 env[1214]: time="2025-03-17T18:43:29.179303721Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3 pid=2077 runtime=io.containerd.runc.v2
Mar 17 18:43:29.186950 env[1214]: time="2025-03-17T18:43:29.186849359Z" level=info msg="CreateContainer within sandbox \"8f7142a99df5c45cf99469d46facaf0a75da5b80065879a3fd91a7ef933e9b1b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dfe41e89dac6efd84a80ef18497b78e0beae5638d00831e549ea208e460932f0\""
Mar 17 18:43:29.187976 env[1214]: time="2025-03-17T18:43:29.187941377Z" level=info msg="StartContainer for \"dfe41e89dac6efd84a80ef18497b78e0beae5638d00831e549ea208e460932f0\""
Mar 17 18:43:29.189286 systemd[1]: Started cri-containerd-b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3.scope.
Mar 17 18:43:29.207522 systemd[1]: Started cri-containerd-dfe41e89dac6efd84a80ef18497b78e0beae5638d00831e549ea208e460932f0.scope.
Mar 17 18:43:29.231096 env[1214]: time="2025-03-17T18:43:29.231057441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dlmfh,Uid:e00882d9-93a0-46b7-ad71-8237272a115f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3\""
Mar 17 18:43:29.232344 kubelet[1902]: E0317 18:43:29.231870 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:29.239612 env[1214]: time="2025-03-17T18:43:29.239573604Z" level=info msg="StartContainer for \"dfe41e89dac6efd84a80ef18497b78e0beae5638d00831e549ea208e460932f0\" returns successfully"
Mar 17 18:43:29.912706 kubelet[1902]: E0317 18:43:29.906445 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:30.941298 kubelet[1902]: E0317 18:43:30.941262 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:30.955741 kubelet[1902]: I0317 18:43:30.955566 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v6sjl" podStartSLOduration=2.9555438499999998 podStartE2EDuration="2.95554385s" podCreationTimestamp="2025-03-17 18:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:29.918894776 +0000 UTC m=+7.122094933" watchObservedRunningTime="2025-03-17 18:43:30.95554385 +0000 UTC m=+8.158744007"
Mar 17 18:43:31.909664 kubelet[1902]: E0317 18:43:31.909617 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:32.911494 kubelet[1902]: E0317 18:43:32.911449 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:35.451553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474169754.mount: Deactivated successfully.
Mar 17 18:43:35.960737 update_engine[1206]: I0317 18:43:35.960673 1206 update_attempter.cc:509] Updating boot flags...
Mar 17 18:43:36.084216 kubelet[1902]: E0317 18:43:36.084153 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:37.377156 kubelet[1902]: E0317 18:43:37.377118 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:39.748078 env[1214]: time="2025-03-17T18:43:39.748001716Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:39.750260 env[1214]: time="2025-03-17T18:43:39.750185871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:39.752072 env[1214]: time="2025-03-17T18:43:39.752030139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:39.752544 env[1214]: time="2025-03-17T18:43:39.752509105Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:43:39.754948 env[1214]: time="2025-03-17T18:43:39.753663181Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:43:39.754948 env[1214]: time="2025-03-17T18:43:39.754481736Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:43:39.769837 env[1214]: time="2025-03-17T18:43:39.769777539Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88\""
Mar 17 18:43:39.770619 env[1214]: time="2025-03-17T18:43:39.770284060Z" level=info msg="StartContainer for \"ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88\""
Mar 17 18:43:39.788946 systemd[1]: Started cri-containerd-ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88.scope.
Mar 17 18:43:39.977422 systemd[1]: cri-containerd-ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88.scope: Deactivated successfully.
Mar 17 18:43:40.053126 env[1214]: time="2025-03-17T18:43:40.053073746Z" level=info msg="StartContainer for \"ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88\" returns successfully"
Mar 17 18:43:40.414463 env[1214]: time="2025-03-17T18:43:40.414313405Z" level=info msg="shim disconnected" id=ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88
Mar 17 18:43:40.414463 env[1214]: time="2025-03-17T18:43:40.414355260Z" level=warning msg="cleaning up after shim disconnected" id=ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88 namespace=k8s.io
Mar 17 18:43:40.414463 env[1214]: time="2025-03-17T18:43:40.414364572Z" level=info msg="cleaning up dead shim"
Mar 17 18:43:40.421203 env[1214]: time="2025-03-17T18:43:40.421116244Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2340 runtime=io.containerd.runc.v2\n"
Mar 17 18:43:40.764960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88-rootfs.mount: Deactivated successfully.
Mar 17 18:43:41.060096 kubelet[1902]: E0317 18:43:41.060050 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:41.062401 env[1214]: time="2025-03-17T18:43:41.062337089Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:43:41.086127 env[1214]: time="2025-03-17T18:43:41.086067005Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c\""
Mar 17 18:43:41.086926 env[1214]: time="2025-03-17T18:43:41.086883285Z" level=info msg="StartContainer for \"a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c\""
Mar 17 18:43:41.109577 systemd[1]: Started cri-containerd-a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c.scope.
Mar 17 18:43:41.137614 env[1214]: time="2025-03-17T18:43:41.137546778Z" level=info msg="StartContainer for \"a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c\" returns successfully"
Mar 17 18:43:41.145861 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:43:41.146065 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:43:41.146233 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:43:41.147682 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:43:41.149599 systemd[1]: cri-containerd-a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c.scope: Deactivated successfully.
Mar 17 18:43:41.156376 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:43:41.171724 env[1214]: time="2025-03-17T18:43:41.171659841Z" level=info msg="shim disconnected" id=a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c
Mar 17 18:43:41.171724 env[1214]: time="2025-03-17T18:43:41.171720978Z" level=warning msg="cleaning up after shim disconnected" id=a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c namespace=k8s.io
Mar 17 18:43:41.171724 env[1214]: time="2025-03-17T18:43:41.171730510Z" level=info msg="cleaning up dead shim"
Mar 17 18:43:41.178030 env[1214]: time="2025-03-17T18:43:41.177993709Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2404 runtime=io.containerd.runc.v2\n"
Mar 17 18:43:41.765080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c-rootfs.mount: Deactivated successfully.
Mar 17 18:43:41.788946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2794784383.mount: Deactivated successfully.
Mar 17 18:43:42.062845 kubelet[1902]: E0317 18:43:42.062805 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:42.064997 env[1214]: time="2025-03-17T18:43:42.064950470Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:43:42.119914 env[1214]: time="2025-03-17T18:43:42.119844844Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72\""
Mar 17 18:43:42.120686 env[1214]: time="2025-03-17T18:43:42.120651057Z" level=info msg="StartContainer for \"6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72\""
Mar 17 18:43:42.136710 systemd[1]: Started cri-containerd-6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72.scope.
Mar 17 18:43:42.176564 systemd[1]: cri-containerd-6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72.scope: Deactivated successfully.
Mar 17 18:43:42.241318 env[1214]: time="2025-03-17T18:43:42.241253306Z" level=info msg="StartContainer for \"6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72\" returns successfully"
Mar 17 18:43:42.298139 env[1214]: time="2025-03-17T18:43:42.298083339Z" level=info msg="shim disconnected" id=6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72
Mar 17 18:43:42.298139 env[1214]: time="2025-03-17T18:43:42.298133902Z" level=warning msg="cleaning up after shim disconnected" id=6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72 namespace=k8s.io
Mar 17 18:43:42.298139 env[1214]: time="2025-03-17T18:43:42.298143404Z" level=info msg="cleaning up dead shim"
Mar 17 18:43:42.305001 env[1214]: time="2025-03-17T18:43:42.304970308Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2460 runtime=io.containerd.runc.v2\n"
Mar 17 18:43:42.420774 env[1214]: time="2025-03-17T18:43:42.420634188Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:42.422521 env[1214]: time="2025-03-17T18:43:42.422467851Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:42.423902 env[1214]: time="2025-03-17T18:43:42.423870239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:43:42.424274 env[1214]: time="2025-03-17T18:43:42.424224541Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:43:42.426155 env[1214]: time="2025-03-17T18:43:42.426125044Z" level=info msg="CreateContainer within sandbox \"b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:43:42.437173 env[1214]: time="2025-03-17T18:43:42.437131990Z" level=info msg="CreateContainer within sandbox \"b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\""
Mar 17 18:43:42.437576 env[1214]: time="2025-03-17T18:43:42.437533960Z" level=info msg="StartContainer for \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\""
Mar 17 18:43:42.451392 systemd[1]: Started cri-containerd-201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba.scope.
Mar 17 18:43:42.476976 env[1214]: time="2025-03-17T18:43:42.476912347Z" level=info msg="StartContainer for \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\" returns successfully"
Mar 17 18:43:43.066113 kubelet[1902]: E0317 18:43:43.066078 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:43.076782 kubelet[1902]: E0317 18:43:43.076380 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:43.077012 kubelet[1902]: I0317 18:43:43.076967 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dlmfh" podStartSLOduration=1.88491977 podStartE2EDuration="15.07695179s" podCreationTimestamp="2025-03-17 18:43:28 +0000 UTC" firstStartedPulling="2025-03-17 18:43:29.23283092 +0000 UTC m=+6.436031077" lastFinishedPulling="2025-03-17 18:43:42.42486294 +0000 UTC m=+19.628063097" observedRunningTime="2025-03-17 18:43:43.076580465 +0000 UTC m=+20.279780622" watchObservedRunningTime="2025-03-17 18:43:43.07695179 +0000 UTC m=+20.280151937"
Mar 17 18:43:43.078683 env[1214]: time="2025-03-17T18:43:43.078636924Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:43:43.094518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1683281995.mount: Deactivated successfully.
Mar 17 18:43:43.100604 env[1214]: time="2025-03-17T18:43:43.100550817Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90\""
Mar 17 18:43:43.101147 env[1214]: time="2025-03-17T18:43:43.101114769Z" level=info msg="StartContainer for \"2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90\""
Mar 17 18:43:43.142152 systemd[1]: Started cri-containerd-2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90.scope.
Mar 17 18:43:43.175129 systemd[1]: cri-containerd-2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90.scope: Deactivated successfully.
Mar 17 18:43:43.215556 env[1214]: time="2025-03-17T18:43:43.215495173Z" level=info msg="StartContainer for \"2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90\" returns successfully"
Mar 17 18:43:43.236124 env[1214]: time="2025-03-17T18:43:43.236064359Z" level=info msg="shim disconnected" id=2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90
Mar 17 18:43:43.236124 env[1214]: time="2025-03-17T18:43:43.236120213Z" level=warning msg="cleaning up after shim disconnected" id=2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90 namespace=k8s.io
Mar 17 18:43:43.236367 env[1214]: time="2025-03-17T18:43:43.236132471Z" level=info msg="cleaning up dead shim"
Mar 17 18:43:43.243539 env[1214]: time="2025-03-17T18:43:43.243492100Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2554 runtime=io.containerd.runc.v2\n"
Mar 17 18:43:43.765776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90-rootfs.mount: Deactivated successfully.
Mar 17 18:43:44.080192 kubelet[1902]: E0317 18:43:44.080148 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:44.080546 kubelet[1902]: E0317 18:43:44.080164 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:44.081840 env[1214]: time="2025-03-17T18:43:44.081783894Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:43:44.098403 env[1214]: time="2025-03-17T18:43:44.098344989Z" level=info msg="CreateContainer within sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\""
Mar 17 18:43:44.098715 env[1214]: time="2025-03-17T18:43:44.098683016Z" level=info msg="StartContainer for \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\""
Mar 17 18:43:44.117281 systemd[1]: Started cri-containerd-27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502.scope.
Mar 17 18:43:44.146432 env[1214]: time="2025-03-17T18:43:44.146378615Z" level=info msg="StartContainer for \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\" returns successfully"
Mar 17 18:43:44.304373 kubelet[1902]: I0317 18:43:44.304323 1902 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 17 18:43:44.333259 systemd[1]: Created slice kubepods-burstable-pod6d90cfef_522b_4bfb_850b_040164f5e40e.slice.
Mar 17 18:43:44.341010 systemd[1]: Created slice kubepods-burstable-podc42c9b71_b907_4740_b00c_795714787a17.slice.
Mar 17 18:43:44.417213 kubelet[1902]: I0317 18:43:44.417165 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c42c9b71-b907-4740-b00c-795714787a17-config-volume\") pod \"coredns-6f6b679f8f-z67qh\" (UID: \"c42c9b71-b907-4740-b00c-795714787a17\") " pod="kube-system/coredns-6f6b679f8f-z67qh"
Mar 17 18:43:44.417213 kubelet[1902]: I0317 18:43:44.417216 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d90cfef-522b-4bfb-850b-040164f5e40e-config-volume\") pod \"coredns-6f6b679f8f-bqrl2\" (UID: \"6d90cfef-522b-4bfb-850b-040164f5e40e\") " pod="kube-system/coredns-6f6b679f8f-bqrl2"
Mar 17 18:43:44.417397 kubelet[1902]: I0317 18:43:44.417237 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxqs\" (UniqueName: \"kubernetes.io/projected/6d90cfef-522b-4bfb-850b-040164f5e40e-kube-api-access-tfxqs\") pod \"coredns-6f6b679f8f-bqrl2\" (UID: \"6d90cfef-522b-4bfb-850b-040164f5e40e\") " pod="kube-system/coredns-6f6b679f8f-bqrl2"
Mar 17 18:43:44.417397 kubelet[1902]: I0317 18:43:44.417254 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxtlb\" (UniqueName: \"kubernetes.io/projected/c42c9b71-b907-4740-b00c-795714787a17-kube-api-access-pxtlb\") pod \"coredns-6f6b679f8f-z67qh\" (UID: \"c42c9b71-b907-4740-b00c-795714787a17\") " pod="kube-system/coredns-6f6b679f8f-z67qh"
Mar 17 18:43:44.640483 kubelet[1902]: E0317 18:43:44.640369 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:44.641303 env[1214]: time="2025-03-17T18:43:44.641271232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bqrl2,Uid:6d90cfef-522b-4bfb-850b-040164f5e40e,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:44.644507 kubelet[1902]: E0317 18:43:44.644484 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:44.644819 env[1214]: time="2025-03-17T18:43:44.644785035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z67qh,Uid:c42c9b71-b907-4740-b00c-795714787a17,Namespace:kube-system,Attempt:0,}"
Mar 17 18:43:45.086460 kubelet[1902]: E0317 18:43:45.086426 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:45.100302 kubelet[1902]: I0317 18:43:45.100226 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6l4j2" podStartSLOduration=6.516243742 podStartE2EDuration="17.100207336s" podCreationTimestamp="2025-03-17 18:43:28 +0000 UTC" firstStartedPulling="2025-03-17 18:43:29.169507857 +0000 UTC m=+6.372708014" lastFinishedPulling="2025-03-17 18:43:39.753471461 +0000 UTC m=+16.956671608" observedRunningTime="2025-03-17 18:43:45.098764013 +0000 UTC m=+22.301964160" watchObservedRunningTime="2025-03-17 18:43:45.100207336 +0000 UTC m=+22.303407513"
Mar 17 18:43:46.088677 kubelet[1902]: E0317 18:43:46.088616 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:46.104232 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:43:46.104348 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:43:46.105194 systemd-networkd[1029]: cilium_host: Link UP
Mar 17 18:43:46.105311 systemd-networkd[1029]: cilium_net: Link UP
Mar 17 18:43:46.105446 systemd-networkd[1029]: cilium_net: Gained carrier
Mar 17 18:43:46.105569 systemd-networkd[1029]: cilium_host: Gained carrier
Mar 17 18:43:46.148280 systemd-networkd[1029]: cilium_net: Gained IPv6LL
Mar 17 18:43:46.182089 systemd-networkd[1029]: cilium_vxlan: Link UP
Mar 17 18:43:46.182094 systemd-networkd[1029]: cilium_vxlan: Gained carrier
Mar 17 18:43:46.367239 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:43:46.819343 systemd-networkd[1029]: cilium_host: Gained IPv6LL
Mar 17 18:43:46.883958 systemd-networkd[1029]: lxc_health: Link UP
Mar 17 18:43:46.894281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:43:46.895012 systemd-networkd[1029]: lxc_health: Gained carrier
Mar 17 18:43:47.090308 kubelet[1902]: E0317 18:43:47.090161 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:47.221561 systemd-networkd[1029]: lxc294e9ae78cfb: Link UP
Mar 17 18:43:47.230220 kernel: eth0: renamed from tmpbfe01
Mar 17 18:43:47.237313 systemd-networkd[1029]: lxccfed9c0cb708: Link UP
Mar 17 18:43:47.247126 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:43:47.247244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc294e9ae78cfb: link becomes ready
Mar 17 18:43:47.246957 systemd-networkd[1029]: lxc294e9ae78cfb: Gained carrier
Mar 17 18:43:47.250199 kernel: eth0: renamed from tmpc0733
Mar 17 18:43:47.257609 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccfed9c0cb708: link becomes ready
Mar 17 18:43:47.256988 systemd-networkd[1029]: lxccfed9c0cb708: Gained carrier
Mar 17 18:43:47.330468 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL
Mar 17 18:43:48.092348 kubelet[1902]: E0317 18:43:48.092313 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:48.418375 systemd-networkd[1029]: lxc294e9ae78cfb: Gained IPv6LL
Mar 17 18:43:48.674377 systemd-networkd[1029]: lxccfed9c0cb708: Gained IPv6LL
Mar 17 18:43:48.674726 systemd-networkd[1029]: lxc_health: Gained IPv6LL
Mar 17 18:43:49.094189 kubelet[1902]: E0317 18:43:49.094129 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:50.095514 kubelet[1902]: E0317 18:43:50.095480 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:50.450841 env[1214]: time="2025-03-17T18:43:50.450683485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:50.450841 env[1214]: time="2025-03-17T18:43:50.450720955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:50.450841 env[1214]: time="2025-03-17T18:43:50.450731146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:50.451270 env[1214]: time="2025-03-17T18:43:50.451005933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c073308681143bae804b9e686311d60f88381f717ec7c4ab2b1136cae2464b3a pid=3128 runtime=io.containerd.runc.v2
Mar 17 18:43:50.453746 env[1214]: time="2025-03-17T18:43:50.453661794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:43:50.453746 env[1214]: time="2025-03-17T18:43:50.453708734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:43:50.453746 env[1214]: time="2025-03-17T18:43:50.453719206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:43:50.453948 env[1214]: time="2025-03-17T18:43:50.453890873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfe015138113f93892a0d1303791ac9c380a08e7eb36acb90e35725053ccaac4 pid=3140 runtime=io.containerd.runc.v2
Mar 17 18:43:50.467894 systemd[1]: run-containerd-runc-k8s.io-bfe015138113f93892a0d1303791ac9c380a08e7eb36acb90e35725053ccaac4-runc.hAYqph.mount: Deactivated successfully.
Mar 17 18:43:50.472411 systemd[1]: Started cri-containerd-bfe015138113f93892a0d1303791ac9c380a08e7eb36acb90e35725053ccaac4.scope.
Mar 17 18:43:50.477782 systemd[1]: Started cri-containerd-c073308681143bae804b9e686311d60f88381f717ec7c4ab2b1136cae2464b3a.scope.
Mar 17 18:43:50.484846 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:43:50.490450 systemd-resolved[1150]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:43:50.513439 env[1214]: time="2025-03-17T18:43:50.513373663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bqrl2,Uid:6d90cfef-522b-4bfb-850b-040164f5e40e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe015138113f93892a0d1303791ac9c380a08e7eb36acb90e35725053ccaac4\""
Mar 17 18:43:50.514388 kubelet[1902]: E0317 18:43:50.514228 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:50.516628 env[1214]: time="2025-03-17T18:43:50.516564174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z67qh,Uid:c42c9b71-b907-4740-b00c-795714787a17,Namespace:kube-system,Attempt:0,} returns sandbox id \"c073308681143bae804b9e686311d60f88381f717ec7c4ab2b1136cae2464b3a\""
Mar 17 18:43:50.518612 kubelet[1902]: E0317 18:43:50.518488 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:50.519617 env[1214]: time="2025-03-17T18:43:50.519594644Z" level=info msg="CreateContainer within sandbox \"bfe015138113f93892a0d1303791ac9c380a08e7eb36acb90e35725053ccaac4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:50.520380 env[1214]: time="2025-03-17T18:43:50.520359247Z" level=info msg="CreateContainer within sandbox \"c073308681143bae804b9e686311d60f88381f717ec7c4ab2b1136cae2464b3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:43:50.924121 env[1214]: time="2025-03-17T18:43:50.924062630Z" level=info msg="CreateContainer within sandbox \"bfe015138113f93892a0d1303791ac9c380a08e7eb36acb90e35725053ccaac4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6065c4fea47773ffa8b442ddbc587c396644fe71e9f52d37f3b1c3130bc9b7cc\""
Mar 17 18:43:50.924598 env[1214]: time="2025-03-17T18:43:50.924554790Z" level=info msg="StartContainer for \"6065c4fea47773ffa8b442ddbc587c396644fe71e9f52d37f3b1c3130bc9b7cc\""
Mar 17 18:43:50.926773 env[1214]: time="2025-03-17T18:43:50.926725044Z" level=info msg="CreateContainer within sandbox \"c073308681143bae804b9e686311d60f88381f717ec7c4ab2b1136cae2464b3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e45ab858b4aa56ab24d75b2dae2d35aa3fe11e258c069f44b955f3314f9b36ff\""
Mar 17 18:43:50.927214 env[1214]: time="2025-03-17T18:43:50.927164242Z" level=info msg="StartContainer for \"e45ab858b4aa56ab24d75b2dae2d35aa3fe11e258c069f44b955f3314f9b36ff\""
Mar 17 18:43:50.944309 systemd[1]: Started 
cri-containerd-6065c4fea47773ffa8b442ddbc587c396644fe71e9f52d37f3b1c3130bc9b7cc.scope. Mar 17 18:43:50.948060 systemd[1]: Started cri-containerd-e45ab858b4aa56ab24d75b2dae2d35aa3fe11e258c069f44b955f3314f9b36ff.scope. Mar 17 18:43:51.025785 env[1214]: time="2025-03-17T18:43:51.025729527Z" level=info msg="StartContainer for \"6065c4fea47773ffa8b442ddbc587c396644fe71e9f52d37f3b1c3130bc9b7cc\" returns successfully" Mar 17 18:43:51.091900 env[1214]: time="2025-03-17T18:43:51.091831032Z" level=info msg="StartContainer for \"e45ab858b4aa56ab24d75b2dae2d35aa3fe11e258c069f44b955f3314f9b36ff\" returns successfully" Mar 17 18:43:51.099916 kubelet[1902]: E0317 18:43:51.098484 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:51.101244 kubelet[1902]: E0317 18:43:51.101014 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:51.172621 kubelet[1902]: I0317 18:43:51.172257 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bqrl2" podStartSLOduration=23.172229096 podStartE2EDuration="23.172229096s" podCreationTimestamp="2025-03-17 18:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:51.172063243 +0000 UTC m=+28.375263400" watchObservedRunningTime="2025-03-17 18:43:51.172229096 +0000 UTC m=+28.375429253" Mar 17 18:43:51.172621 kubelet[1902]: I0317 18:43:51.172340 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z67qh" podStartSLOduration=23.172337206 podStartE2EDuration="23.172337206s" podCreationTimestamp="2025-03-17 18:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:43:51.164621485 +0000 UTC m=+28.367821642" watchObservedRunningTime="2025-03-17 18:43:51.172337206 +0000 UTC m=+28.375537363" Mar 17 18:43:52.102862 kubelet[1902]: E0317 18:43:52.102825 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:52.103551 kubelet[1902]: E0317 18:43:52.102969 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:43:52.375908 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:39208.service. Mar 17 18:43:52.415796 sshd[3283]: Accepted publickey for core from 10.0.0.1 port 39208 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:43:52.416905 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:43:52.420611 systemd-logind[1205]: New session 6 of user core. Mar 17 18:43:52.421468 systemd[1]: Started session-6.scope. Mar 17 18:43:52.543988 sshd[3283]: pam_unix(sshd:session): session closed for user core Mar 17 18:43:52.546321 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:39208.service: Deactivated successfully. Mar 17 18:43:52.547006 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:43:52.547733 systemd-logind[1205]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:43:52.548463 systemd-logind[1205]: Removed session 6. 
Mar 17 18:43:53.107269 kubelet[1902]: E0317 18:43:53.107218 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:53.107642 kubelet[1902]: E0317 18:43:53.107300 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:43:57.550013 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:49198.service.
Mar 17 18:43:57.587996 sshd[3298]: Accepted publickey for core from 10.0.0.1 port 49198 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:43:57.589385 sshd[3298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:43:57.593054 systemd-logind[1205]: New session 7 of user core.
Mar 17 18:43:57.593989 systemd[1]: Started session-7.scope.
Mar 17 18:43:57.699005 sshd[3298]: pam_unix(sshd:session): session closed for user core
Mar 17 18:43:57.701429 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:49198.service: Deactivated successfully.
Mar 17 18:43:57.702270 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:43:57.702791 systemd-logind[1205]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:43:57.703497 systemd-logind[1205]: Removed session 7.
Mar 17 18:44:02.705008 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:49210.service.
Mar 17 18:44:02.743752 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 49210 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:02.744870 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:02.748168 systemd-logind[1205]: New session 8 of user core.
Mar 17 18:44:02.748944 systemd[1]: Started session-8.scope.
Mar 17 18:44:02.853037 sshd[3316]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:02.855375 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:49210.service: Deactivated successfully.
Mar 17 18:44:02.856087 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:44:02.856611 systemd-logind[1205]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:44:02.857357 systemd-logind[1205]: Removed session 8.
Mar 17 18:44:07.859111 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:52456.service.
Mar 17 18:44:07.899507 sshd[3330]: Accepted publickey for core from 10.0.0.1 port 52456 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:07.900693 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:07.904226 systemd-logind[1205]: New session 9 of user core.
Mar 17 18:44:07.904971 systemd[1]: Started session-9.scope.
Mar 17 18:44:08.007959 sshd[3330]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:08.010423 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:52456.service: Deactivated successfully.
Mar 17 18:44:08.011188 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:44:08.011671 systemd-logind[1205]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:44:08.012346 systemd-logind[1205]: Removed session 9.
Mar 17 18:44:13.013665 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:52466.service.
Mar 17 18:44:13.054114 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 52466 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:13.055502 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:13.059480 systemd-logind[1205]: New session 10 of user core.
Mar 17 18:44:13.060735 systemd[1]: Started session-10.scope.
Mar 17 18:44:13.191318 sshd[3346]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:13.195111 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:52466.service: Deactivated successfully.
Mar 17 18:44:13.195844 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:44:13.196442 systemd-logind[1205]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:44:13.197747 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:52474.service.
Mar 17 18:44:13.198613 systemd-logind[1205]: Removed session 10.
Mar 17 18:44:13.235199 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 52474 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:13.236243 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:13.239538 systemd-logind[1205]: New session 11 of user core.
Mar 17 18:44:13.240378 systemd[1]: Started session-11.scope.
Mar 17 18:44:13.386301 sshd[3360]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:13.388897 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:52474.service: Deactivated successfully.
Mar 17 18:44:13.389424 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:44:13.390942 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:52490.service.
Mar 17 18:44:13.396482 systemd-logind[1205]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:44:13.401794 systemd-logind[1205]: Removed session 11.
Mar 17 18:44:13.430220 sshd[3372]: Accepted publickey for core from 10.0.0.1 port 52490 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:13.431413 sshd[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:13.434950 systemd-logind[1205]: New session 12 of user core.
Mar 17 18:44:13.436137 systemd[1]: Started session-12.scope.
Mar 17 18:44:13.539430 sshd[3372]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:13.541960 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:52490.service: Deactivated successfully.
Mar 17 18:44:13.542866 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:44:13.543440 systemd-logind[1205]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:44:13.544264 systemd-logind[1205]: Removed session 12.
Mar 17 18:44:18.545331 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:42324.service.
Mar 17 18:44:18.583380 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 42324 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:18.584656 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:18.588341 systemd-logind[1205]: New session 13 of user core.
Mar 17 18:44:18.589116 systemd[1]: Started session-13.scope.
Mar 17 18:44:18.691657 sshd[3388]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:18.693662 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:42324.service: Deactivated successfully.
Mar 17 18:44:18.694335 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:44:18.694859 systemd-logind[1205]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:44:18.695509 systemd-logind[1205]: Removed session 13.
Mar 17 18:44:23.696861 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:42334.service.
Mar 17 18:44:23.737135 sshd[3404]: Accepted publickey for core from 10.0.0.1 port 42334 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:23.738340 sshd[3404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:23.741830 systemd-logind[1205]: New session 14 of user core.
Mar 17 18:44:23.742670 systemd[1]: Started session-14.scope.
Mar 17 18:44:23.852668 sshd[3404]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:23.856402 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:42334.service: Deactivated successfully.
Mar 17 18:44:23.857141 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:44:23.857814 systemd-logind[1205]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:44:23.859482 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:42350.service.
Mar 17 18:44:23.860466 systemd-logind[1205]: Removed session 14.
Mar 17 18:44:23.897520 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 42350 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:23.898779 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:23.902360 systemd-logind[1205]: New session 15 of user core.
Mar 17 18:44:23.903262 systemd[1]: Started session-15.scope.
Mar 17 18:44:24.103924 sshd[3418]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:24.107334 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:33472.service.
Mar 17 18:44:24.107856 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:42350.service: Deactivated successfully.
Mar 17 18:44:24.108486 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:44:24.109120 systemd-logind[1205]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:44:24.110109 systemd-logind[1205]: Removed session 15.
Mar 17 18:44:24.149133 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 33472 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:24.150490 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:24.154288 systemd-logind[1205]: New session 16 of user core.
Mar 17 18:44:24.155110 systemd[1]: Started session-16.scope.
Mar 17 18:44:25.415993 sshd[3428]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:25.420743 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:33476.service.
Mar 17 18:44:25.423916 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:33472.service: Deactivated successfully.
Mar 17 18:44:25.425098 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:44:25.426478 systemd-logind[1205]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:44:25.427705 systemd-logind[1205]: Removed session 16.
Mar 17 18:44:25.463447 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 33476 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:25.464683 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:25.468143 systemd-logind[1205]: New session 17 of user core.
Mar 17 18:44:25.468965 systemd[1]: Started session-17.scope.
Mar 17 18:44:25.751382 sshd[3445]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:25.754118 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:33476.service: Deactivated successfully.
Mar 17 18:44:25.754860 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:44:25.755676 systemd-logind[1205]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:44:25.757113 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:33486.service.
Mar 17 18:44:25.758100 systemd-logind[1205]: Removed session 17.
Mar 17 18:44:25.795114 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 33486 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:25.796567 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:25.800377 systemd-logind[1205]: New session 18 of user core.
Mar 17 18:44:25.801452 systemd[1]: Started session-18.scope.
Mar 17 18:44:25.908835 sshd[3458]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:25.911527 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:33486.service: Deactivated successfully.
Mar 17 18:44:25.912328 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:44:25.913070 systemd-logind[1205]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:44:25.913913 systemd-logind[1205]: Removed session 18.
Mar 17 18:44:30.912819 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:33502.service.
Mar 17 18:44:30.950805 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 33502 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:30.951936 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:30.955138 systemd-logind[1205]: New session 19 of user core.
Mar 17 18:44:30.955853 systemd[1]: Started session-19.scope.
Mar 17 18:44:31.054897 sshd[3474]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:31.057887 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:33502.service: Deactivated successfully.
Mar 17 18:44:31.058614 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:44:31.059381 systemd-logind[1205]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:44:31.060077 systemd-logind[1205]: Removed session 19.
Mar 17 18:44:36.060629 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:38036.service.
Mar 17 18:44:36.100522 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 38036 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:36.101953 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:36.106348 systemd-logind[1205]: New session 20 of user core.
Mar 17 18:44:36.107387 systemd[1]: Started session-20.scope.
Mar 17 18:44:36.216631 sshd[3491]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:36.220505 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:38036.service: Deactivated successfully.
Mar 17 18:44:36.221241 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:44:36.222014 systemd-logind[1205]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:44:36.222780 systemd-logind[1205]: Removed session 20.
Mar 17 18:44:40.887647 kubelet[1902]: E0317 18:44:40.887593 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:41.222535 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:38050.service.
Mar 17 18:44:41.262490 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 38050 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:41.264096 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:41.268252 systemd-logind[1205]: New session 21 of user core.
Mar 17 18:44:41.269248 systemd[1]: Started session-21.scope.
Mar 17 18:44:41.372554 sshd[3505]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:41.375026 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:38050.service: Deactivated successfully.
Mar 17 18:44:41.375917 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:44:41.376436 systemd-logind[1205]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:44:41.377133 systemd-logind[1205]: Removed session 21.
Mar 17 18:44:45.886903 kubelet[1902]: E0317 18:44:45.886857 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:46.378574 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:57866.service.
Mar 17 18:44:46.417482 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 57866 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:46.418848 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:46.422747 systemd-logind[1205]: New session 22 of user core.
Mar 17 18:44:46.423760 systemd[1]: Started session-22.scope.
Mar 17 18:44:46.521652 sshd[3518]: pam_unix(sshd:session): session closed for user core
Mar 17 18:44:46.524536 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:57866.service: Deactivated successfully.
Mar 17 18:44:46.525285 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:44:46.526089 systemd-logind[1205]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:44:46.527282 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:57868.service.
Mar 17 18:44:46.528480 systemd-logind[1205]: Removed session 22.
Mar 17 18:44:46.565165 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 57868 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64
Mar 17 18:44:46.566150 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:44:46.569615 systemd-logind[1205]: New session 23 of user core.
Mar 17 18:44:46.570609 systemd[1]: Started session-23.scope.
Mar 17 18:44:47.887196 kubelet[1902]: E0317 18:44:47.887124 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:48.659672 systemd[1]: run-containerd-runc-k8s.io-27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502-runc.acBYAd.mount: Deactivated successfully.
Mar 17 18:44:48.677650 env[1214]: time="2025-03-17T18:44:48.677571251Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:44:48.683029 env[1214]: time="2025-03-17T18:44:48.682988890Z" level=info msg="StopContainer for \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\" with timeout 2 (s)"
Mar 17 18:44:48.683289 env[1214]: time="2025-03-17T18:44:48.683266052Z" level=info msg="Stop container \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\" with signal terminated"
Mar 17 18:44:48.689937 systemd-networkd[1029]: lxc_health: Link DOWN
Mar 17 18:44:48.689944 systemd-networkd[1029]: lxc_health: Lost carrier
Mar 17 18:44:48.717862 env[1214]: time="2025-03-17T18:44:48.717823337Z" level=info msg="StopContainer for \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\" with timeout 30 (s)"
Mar 17 18:44:48.718467 env[1214]: time="2025-03-17T18:44:48.718422846Z" level=info msg="Stop container \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\" with signal terminated"
Mar 17 18:44:48.725166 systemd[1]: cri-containerd-27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502.scope: Deactivated successfully.
Mar 17 18:44:48.725476 systemd[1]: cri-containerd-27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502.scope: Consumed 5.917s CPU time.
Mar 17 18:44:48.729539 systemd[1]: cri-containerd-201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba.scope: Deactivated successfully.
Mar 17 18:44:48.748240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502-rootfs.mount: Deactivated successfully.
Mar 17 18:44:48.752471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba-rootfs.mount: Deactivated successfully.
Mar 17 18:44:49.062748 env[1214]: time="2025-03-17T18:44:49.062698329Z" level=info msg="shim disconnected" id=27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502
Mar 17 18:44:49.063027 env[1214]: time="2025-03-17T18:44:49.062752171Z" level=warning msg="cleaning up after shim disconnected" id=27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502 namespace=k8s.io
Mar 17 18:44:49.063027 env[1214]: time="2025-03-17T18:44:49.062761158Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:49.068781 env[1214]: time="2025-03-17T18:44:49.068681869Z" level=info msg="shim disconnected" id=201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba
Mar 17 18:44:49.068781 env[1214]: time="2025-03-17T18:44:49.068750448Z" level=warning msg="cleaning up after shim disconnected" id=201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba namespace=k8s.io
Mar 17 18:44:49.068781 env[1214]: time="2025-03-17T18:44:49.068773222Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:49.072631 env[1214]: time="2025-03-17T18:44:49.072579897Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3600 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:49.077144 env[1214]: time="2025-03-17T18:44:49.077071945Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3610 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:49.163918 env[1214]: time="2025-03-17T18:44:49.163842596Z" level=info msg="StopContainer for \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\" returns successfully"
Mar 17 18:44:49.164571 env[1214]: time="2025-03-17T18:44:49.164526786Z" level=info msg="StopPodSandbox for \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\""
Mar 17 18:44:49.164821 env[1214]: time="2025-03-17T18:44:49.164610714Z" level=info msg="Container to stop \"ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:49.164821 env[1214]: time="2025-03-17T18:44:49.164634349Z" level=info msg="Container to stop \"a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:49.164821 env[1214]: time="2025-03-17T18:44:49.164647263Z" level=info msg="Container to stop \"6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:49.164821 env[1214]: time="2025-03-17T18:44:49.164658754Z" level=info msg="Container to stop \"2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:49.164821 env[1214]: time="2025-03-17T18:44:49.164671488Z" level=info msg="Container to stop \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:49.167047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c-shm.mount: Deactivated successfully.
Mar 17 18:44:49.170538 systemd[1]: cri-containerd-7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c.scope: Deactivated successfully.
Mar 17 18:44:49.217309 env[1214]: time="2025-03-17T18:44:49.216575954Z" level=info msg="StopContainer for \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\" returns successfully"
Mar 17 18:44:49.219456 env[1214]: time="2025-03-17T18:44:49.219427109Z" level=info msg="StopPodSandbox for \"b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3\""
Mar 17 18:44:49.219625 env[1214]: time="2025-03-17T18:44:49.219579355Z" level=info msg="Container to stop \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:44:49.228426 systemd[1]: cri-containerd-b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3.scope: Deactivated successfully.
Mar 17 18:44:49.332633 env[1214]: time="2025-03-17T18:44:49.332452702Z" level=info msg="shim disconnected" id=7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c
Mar 17 18:44:49.332633 env[1214]: time="2025-03-17T18:44:49.332525129Z" level=warning msg="cleaning up after shim disconnected" id=7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c namespace=k8s.io
Mar 17 18:44:49.332633 env[1214]: time="2025-03-17T18:44:49.332541129Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:49.340453 env[1214]: time="2025-03-17T18:44:49.340405885Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3661 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:49.340851 env[1214]: time="2025-03-17T18:44:49.340812201Z" level=info msg="TearDown network for sandbox \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" successfully"
Mar 17 18:44:49.340851 env[1214]: time="2025-03-17T18:44:49.340843179Z" level=info msg="StopPodSandbox for \"7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c\" returns successfully"
Mar 17 18:44:49.375552 kubelet[1902]: I0317 18:44:49.375490 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-kernel\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.375552 kubelet[1902]: I0317 18:44:49.375553 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-hubble-tls\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.387090 kubelet[1902]: I0317 18:44:49.387040 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:49.389545 kubelet[1902]: I0317 18:44:49.389484 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:44:49.451197 env[1214]: time="2025-03-17T18:44:49.451120923Z" level=info msg="shim disconnected" id=b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3
Mar 17 18:44:49.451930 env[1214]: time="2025-03-17T18:44:49.451903418Z" level=warning msg="cleaning up after shim disconnected" id=b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3 namespace=k8s.io
Mar 17 18:44:49.451930 env[1214]: time="2025-03-17T18:44:49.451922234Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:49.458198 env[1214]: time="2025-03-17T18:44:49.458129025Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3674 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:49.458484 env[1214]: time="2025-03-17T18:44:49.458457864Z" level=info msg="TearDown network for sandbox \"b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3\" successfully"
Mar 17 18:44:49.458484 env[1214]: time="2025-03-17T18:44:49.458482000Z" level=info msg="StopPodSandbox for \"b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3\" returns successfully"
Mar 17 18:44:49.476097 kubelet[1902]: I0317 18:44:49.476066 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cni-path\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.476193 kubelet[1902]: I0317 18:44:49.476111 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-config-path\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.476193 kubelet[1902]: I0317 18:44:49.476131 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-xtables-lock\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.476193 kubelet[1902]: I0317 18:44:49.476152 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m24dx\" (UniqueName: \"kubernetes.io/projected/e00882d9-93a0-46b7-ad71-8237272a115f-kube-api-access-m24dx\") pod \"e00882d9-93a0-46b7-ad71-8237272a115f\" (UID: \"e00882d9-93a0-46b7-ad71-8237272a115f\") "
Mar 17 18:44:49.476193 kubelet[1902]: I0317 18:44:49.476159 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cni-path" (OuterVolumeSpecName: "cni-path") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:44:49.476313 kubelet[1902]: I0317 18:44:49.476171 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-run\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.476313 kubelet[1902]: I0317 18:44:49.476216 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/086f8ba3-f9e8-414e-8332-8e34fe73720f-clustermesh-secrets\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") "
Mar 17 18:44:49.476313 kubelet[1902]: I0317 18:44:49.476217 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod 
"086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.476313 kubelet[1902]: I0317 18:44:49.476237 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e00882d9-93a0-46b7-ad71-8237272a115f-cilium-config-path\") pod \"e00882d9-93a0-46b7-ad71-8237272a115f\" (UID: \"e00882d9-93a0-46b7-ad71-8237272a115f\") " Mar 17 18:44:49.476313 kubelet[1902]: I0317 18:44:49.476256 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-hostproc\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476313 kubelet[1902]: I0317 18:44:49.476277 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-lib-modules\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476465 kubelet[1902]: I0317 18:44:49.476297 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn774\" (UniqueName: \"kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-kube-api-access-sn774\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476465 kubelet[1902]: I0317 18:44:49.476316 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-bpf-maps\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476465 kubelet[1902]: I0317 18:44:49.476334 1902 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-etc-cni-netd\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476465 kubelet[1902]: I0317 18:44:49.476352 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-net\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476465 kubelet[1902]: I0317 18:44:49.476372 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-cgroup\") pod \"086f8ba3-f9e8-414e-8332-8e34fe73720f\" (UID: \"086f8ba3-f9e8-414e-8332-8e34fe73720f\") " Mar 17 18:44:49.476465 kubelet[1902]: I0317 18:44:49.476412 1902 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.476604 kubelet[1902]: I0317 18:44:49.476425 1902 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.476604 kubelet[1902]: I0317 18:44:49.476436 1902 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.476604 kubelet[1902]: I0317 18:44:49.476446 1902 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.476604 kubelet[1902]: I0317 18:44:49.476475 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.476604 kubelet[1902]: I0317 18:44:49.476498 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.478677 kubelet[1902]: I0317 18:44:49.476901 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-hostproc" (OuterVolumeSpecName: "hostproc") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.478677 kubelet[1902]: I0317 18:44:49.478562 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:44:49.478677 kubelet[1902]: I0317 18:44:49.478620 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.478677 kubelet[1902]: I0317 18:44:49.478641 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.478677 kubelet[1902]: I0317 18:44:49.478677 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.478842 kubelet[1902]: I0317 18:44:49.478705 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:49.479221 kubelet[1902]: I0317 18:44:49.479193 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-kube-api-access-sn774" (OuterVolumeSpecName: "kube-api-access-sn774") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "kube-api-access-sn774". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:49.479675 kubelet[1902]: I0317 18:44:49.479652 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00882d9-93a0-46b7-ad71-8237272a115f-kube-api-access-m24dx" (OuterVolumeSpecName: "kube-api-access-m24dx") pod "e00882d9-93a0-46b7-ad71-8237272a115f" (UID: "e00882d9-93a0-46b7-ad71-8237272a115f"). InnerVolumeSpecName "kube-api-access-m24dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:49.479944 kubelet[1902]: I0317 18:44:49.479915 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00882d9-93a0-46b7-ad71-8237272a115f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e00882d9-93a0-46b7-ad71-8237272a115f" (UID: "e00882d9-93a0-46b7-ad71-8237272a115f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:44:49.481185 kubelet[1902]: I0317 18:44:49.481148 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086f8ba3-f9e8-414e-8332-8e34fe73720f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "086f8ba3-f9e8-414e-8332-8e34fe73720f" (UID: "086f8ba3-f9e8-414e-8332-8e34fe73720f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:44:49.577034 kubelet[1902]: I0317 18:44:49.576980 1902 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577034 kubelet[1902]: I0317 18:44:49.577014 1902 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577034 kubelet[1902]: I0317 18:44:49.577024 1902 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577034 kubelet[1902]: I0317 18:44:49.577032 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577034 kubelet[1902]: I0317 18:44:49.577046 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577034 kubelet[1902]: I0317 18:44:49.577053 1902 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/086f8ba3-f9e8-414e-8332-8e34fe73720f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577387 kubelet[1902]: I0317 18:44:49.577061 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/086f8ba3-f9e8-414e-8332-8e34fe73720f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577387 kubelet[1902]: I0317 
18:44:49.577069 1902 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-m24dx\" (UniqueName: \"kubernetes.io/projected/e00882d9-93a0-46b7-ad71-8237272a115f-kube-api-access-m24dx\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577387 kubelet[1902]: I0317 18:44:49.577076 1902 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577387 kubelet[1902]: I0317 18:44:49.577084 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e00882d9-93a0-46b7-ad71-8237272a115f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577387 kubelet[1902]: I0317 18:44:49.577090 1902 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/086f8ba3-f9e8-414e-8332-8e34fe73720f-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.577387 kubelet[1902]: I0317 18:44:49.577101 1902 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sn774\" (UniqueName: \"kubernetes.io/projected/086f8ba3-f9e8-414e-8332-8e34fe73720f-kube-api-access-sn774\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:49.656122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3-rootfs.mount: Deactivated successfully. Mar 17 18:44:49.656250 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9a10ad6b70dc45123f8618c76e6c94a13dea3dada0e396fc5ac7a5b6b55f9f3-shm.mount: Deactivated successfully. Mar 17 18:44:49.656307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b9ec8dca0a9ec5d028503feb7a47677997e3fee0770c4e61278a7c25af1c37c-rootfs.mount: Deactivated successfully. 
Mar 17 18:44:49.656355 systemd[1]: var-lib-kubelet-pods-e00882d9\x2d93a0\x2d46b7\x2dad71\x2d8237272a115f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm24dx.mount: Deactivated successfully. Mar 17 18:44:49.656419 systemd[1]: var-lib-kubelet-pods-086f8ba3\x2df9e8\x2d414e\x2d8332\x2d8e34fe73720f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsn774.mount: Deactivated successfully. Mar 17 18:44:49.656475 systemd[1]: var-lib-kubelet-pods-086f8ba3\x2df9e8\x2d414e\x2d8332\x2d8e34fe73720f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:44:49.656523 systemd[1]: var-lib-kubelet-pods-086f8ba3\x2df9e8\x2d414e\x2d8332\x2d8e34fe73720f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:44:50.229399 kubelet[1902]: I0317 18:44:50.229354 1902 scope.go:117] "RemoveContainer" containerID="27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502" Mar 17 18:44:50.230522 env[1214]: time="2025-03-17T18:44:50.230482416Z" level=info msg="RemoveContainer for \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\"" Mar 17 18:44:50.233962 systemd[1]: Removed slice kubepods-burstable-pod086f8ba3_f9e8_414e_8332_8e34fe73720f.slice. Mar 17 18:44:50.234039 systemd[1]: kubepods-burstable-pod086f8ba3_f9e8_414e_8332_8e34fe73720f.slice: Consumed 6.025s CPU time. Mar 17 18:44:50.235440 systemd[1]: Removed slice kubepods-besteffort-pode00882d9_93a0_46b7_ad71_8237272a115f.slice. Mar 17 18:44:50.247396 sshd[3531]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:50.251873 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:57884.service. Mar 17 18:44:50.252401 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:57868.service: Deactivated successfully. Mar 17 18:44:50.253146 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:44:50.253318 systemd[1]: session-23.scope: Consumed 1.077s CPU time. 
Mar 17 18:44:50.254026 systemd-logind[1205]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:44:50.255447 systemd-logind[1205]: Removed session 23. Mar 17 18:44:50.305256 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 57884 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:44:50.306570 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:50.310137 systemd-logind[1205]: New session 24 of user core. Mar 17 18:44:50.311230 systemd[1]: Started session-24.scope. Mar 17 18:44:50.331680 env[1214]: time="2025-03-17T18:44:50.331616702Z" level=info msg="RemoveContainer for \"27dc2415685cf623a8b278f77e033899b9ebe066faaa26351fdc3a2c38341502\" returns successfully" Mar 17 18:44:50.332045 kubelet[1902]: I0317 18:44:50.332008 1902 scope.go:117] "RemoveContainer" containerID="2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90" Mar 17 18:44:50.333430 env[1214]: time="2025-03-17T18:44:50.333390942Z" level=info msg="RemoveContainer for \"2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90\"" Mar 17 18:44:50.360277 env[1214]: time="2025-03-17T18:44:50.360201948Z" level=info msg="RemoveContainer for \"2b4d45523432b916987bbbfa6c24cd603e2cc3494ea5bfe90cac6c06ba04af90\" returns successfully" Mar 17 18:44:50.360560 kubelet[1902]: I0317 18:44:50.360520 1902 scope.go:117] "RemoveContainer" containerID="6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72" Mar 17 18:44:50.361804 env[1214]: time="2025-03-17T18:44:50.361755791Z" level=info msg="RemoveContainer for \"6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72\"" Mar 17 18:44:50.370320 env[1214]: time="2025-03-17T18:44:50.370225429Z" level=info msg="RemoveContainer for \"6dbe9c19c911900902ede4e8445341c190d01a8ab37b45de05db29afe233af72\" returns successfully" Mar 17 18:44:50.370634 kubelet[1902]: I0317 18:44:50.370591 1902 scope.go:117] "RemoveContainer" 
containerID="a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c" Mar 17 18:44:50.372574 env[1214]: time="2025-03-17T18:44:50.372543915Z" level=info msg="RemoveContainer for \"a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c\"" Mar 17 18:44:50.380219 env[1214]: time="2025-03-17T18:44:50.380065231Z" level=info msg="RemoveContainer for \"a3ebcc5d755b3b19bf55924bc1e5640627bdfc0416cd301cfd8c7ade4b76ec2c\" returns successfully" Mar 17 18:44:50.380967 kubelet[1902]: I0317 18:44:50.380926 1902 scope.go:117] "RemoveContainer" containerID="ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88" Mar 17 18:44:50.382599 env[1214]: time="2025-03-17T18:44:50.382564339Z" level=info msg="RemoveContainer for \"ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88\"" Mar 17 18:44:50.386048 env[1214]: time="2025-03-17T18:44:50.386018970Z" level=info msg="RemoveContainer for \"ae6c1cd9927157da2b6075e1d38825ddac613281fb1dd68e07a83c9e38bfcc88\" returns successfully" Mar 17 18:44:50.386234 kubelet[1902]: I0317 18:44:50.386207 1902 scope.go:117] "RemoveContainer" containerID="201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba" Mar 17 18:44:50.387335 env[1214]: time="2025-03-17T18:44:50.387312431Z" level=info msg="RemoveContainer for \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\"" Mar 17 18:44:50.390670 env[1214]: time="2025-03-17T18:44:50.390622880Z" level=info msg="RemoveContainer for \"201931dee49fa38900b57d925e0ee45792bd8c18d3ccbe1504b6e7b64b3e94ba\" returns successfully" Mar 17 18:44:50.889546 kubelet[1902]: I0317 18:44:50.889501 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" path="/var/lib/kubelet/pods/086f8ba3-f9e8-414e-8332-8e34fe73720f/volumes" Mar 17 18:44:50.890188 kubelet[1902]: I0317 18:44:50.890126 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00882d9-93a0-46b7-ad71-8237272a115f" 
path="/var/lib/kubelet/pods/e00882d9-93a0-46b7-ad71-8237272a115f/volumes" Mar 17 18:44:51.778111 sshd[3691]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:51.780963 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:57884.service: Deactivated successfully. Mar 17 18:44:51.781524 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:44:51.781656 systemd[1]: session-24.scope: Consumed 1.010s CPU time. Mar 17 18:44:51.782060 systemd-logind[1205]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:44:51.783327 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:57890.service. Mar 17 18:44:51.784150 systemd-logind[1205]: Removed session 24. Mar 17 18:44:51.821773 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 57890 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:44:51.823547 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:51.828946 systemd[1]: Started session-25.scope. Mar 17 18:44:51.829098 systemd-logind[1205]: New session 25 of user core. 
Mar 17 18:44:51.856973 kubelet[1902]: E0317 18:44:51.856031 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" containerName="cilium-agent" Mar 17 18:44:51.856973 kubelet[1902]: E0317 18:44:51.856094 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" containerName="apply-sysctl-overwrites" Mar 17 18:44:51.856973 kubelet[1902]: E0317 18:44:51.856103 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e00882d9-93a0-46b7-ad71-8237272a115f" containerName="cilium-operator" Mar 17 18:44:51.856973 kubelet[1902]: E0317 18:44:51.856135 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" containerName="mount-cgroup" Mar 17 18:44:51.856973 kubelet[1902]: E0317 18:44:51.856141 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" containerName="mount-bpf-fs" Mar 17 18:44:51.856973 kubelet[1902]: E0317 18:44:51.856149 1902 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" containerName="clean-cilium-state" Mar 17 18:44:51.856973 kubelet[1902]: I0317 18:44:51.856195 1902 memory_manager.go:354] "RemoveStaleState removing state" podUID="086f8ba3-f9e8-414e-8332-8e34fe73720f" containerName="cilium-agent" Mar 17 18:44:51.856973 kubelet[1902]: I0317 18:44:51.856202 1902 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00882d9-93a0-46b7-ad71-8237272a115f" containerName="cilium-operator" Mar 17 18:44:51.862380 systemd[1]: Created slice kubepods-burstable-pod381a7987_b238_42dd_a10e_a34bc050fcd7.slice. 
Mar 17 18:44:51.888531 kubelet[1902]: I0317 18:44:51.888492 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-bpf-maps\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.888784 kubelet[1902]: I0317 18:44:51.888763 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-cgroup\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.888905 kubelet[1902]: I0317 18:44:51.888890 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-config-path\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889020 kubelet[1902]: I0317 18:44:51.889005 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-ipsec-secrets\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889137 kubelet[1902]: I0317 18:44:51.889122 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-hubble-tls\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889257 kubelet[1902]: I0317 18:44:51.889240 1902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skk9c\" (UniqueName: \"kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-kube-api-access-skk9c\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889362 kubelet[1902]: I0317 18:44:51.889345 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-run\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889461 kubelet[1902]: I0317 18:44:51.889445 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-etc-cni-netd\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889558 kubelet[1902]: I0317 18:44:51.889541 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-clustermesh-secrets\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889665 kubelet[1902]: I0317 18:44:51.889648 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cni-path\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889776 kubelet[1902]: I0317 18:44:51.889758 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-xtables-lock\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889879 kubelet[1902]: I0317 18:44:51.889861 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-net\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.889975 kubelet[1902]: I0317 18:44:51.889958 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-kernel\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.890086 kubelet[1902]: I0317 18:44:51.890069 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-hostproc\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.890314 kubelet[1902]: I0317 18:44:51.890297 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-lib-modules\") pod \"cilium-j8q8n\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " pod="kube-system/cilium-j8q8n" Mar 17 18:44:51.967453 sshd[3704]: pam_unix(sshd:session): session closed for user core Mar 17 18:44:51.970799 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:57890.service: Deactivated successfully. Mar 17 18:44:51.971479 systemd[1]: session-25.scope: Deactivated successfully. 
Mar 17 18:44:51.973394 systemd-logind[1205]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:44:51.974304 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:57894.service. Mar 17 18:44:51.975980 systemd-logind[1205]: Removed session 25. Mar 17 18:44:52.018618 kubelet[1902]: E0317 18:44:52.018505 1902 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-skk9c], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-j8q8n" podUID="381a7987-b238-42dd-a10e-a34bc050fcd7" Mar 17 18:44:52.023913 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 57894 ssh2: RSA SHA256:DYcGKLA+BUI3KXBOyjzF6/uTec/cV0nLMAEcssN4/64 Mar 17 18:44:52.025284 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:44:52.029057 systemd-logind[1205]: New session 26 of user core. Mar 17 18:44:52.030348 systemd[1]: Started session-26.scope. Mar 17 18:44:52.292902 kubelet[1902]: I0317 18:44:52.292774 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-hubble-tls\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.292902 kubelet[1902]: I0317 18:44:52.292806 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-etc-cni-netd\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.292902 kubelet[1902]: I0317 18:44:52.292826 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-lib-modules\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 
18:44:52.292902 kubelet[1902]: I0317 18:44:52.292842 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-cgroup\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.292902 kubelet[1902]: I0317 18:44:52.292857 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-kernel\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.292902 kubelet[1902]: I0317 18:44:52.292876 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-hostproc\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293219 kubelet[1902]: I0317 18:44:52.292890 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-bpf-maps\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293219 kubelet[1902]: I0317 18:44:52.292906 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-ipsec-secrets\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293219 kubelet[1902]: I0317 18:44:52.292922 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-clustermesh-secrets\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293219 kubelet[1902]: I0317 18:44:52.292937 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cni-path\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293219 kubelet[1902]: I0317 18:44:52.292920 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.293219 kubelet[1902]: I0317 18:44:52.292952 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-xtables-lock\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293370 kubelet[1902]: I0317 18:44:52.292988 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.293370 kubelet[1902]: I0317 18:44:52.293009 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skk9c\" (UniqueName: \"kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-kube-api-access-skk9c\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293370 kubelet[1902]: I0317 18:44:52.293012 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.293370 kubelet[1902]: I0317 18:44:52.293029 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.293370 kubelet[1902]: I0317 18:44:52.293032 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-net\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293489 kubelet[1902]: I0317 18:44:52.293043 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-hostproc" (OuterVolumeSpecName: "hostproc") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.293489 kubelet[1902]: I0317 18:44:52.293048 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-run\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293489 kubelet[1902]: I0317 18:44:52.293057 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.293489 kubelet[1902]: I0317 18:44:52.293067 1902 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-config-path\") pod \"381a7987-b238-42dd-a10e-a34bc050fcd7\" (UID: \"381a7987-b238-42dd-a10e-a34bc050fcd7\") " Mar 17 18:44:52.293489 kubelet[1902]: I0317 18:44:52.293095 1902 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.293489 kubelet[1902]: I0317 18:44:52.293103 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.293620 kubelet[1902]: I0317 18:44:52.293109 1902 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.293620 kubelet[1902]: I0317 18:44:52.293118 1902 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.293620 kubelet[1902]: I0317 18:44:52.293125 1902 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.293620 kubelet[1902]: I0317 18:44:52.293131 1902 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-xtables-lock\") on node \"localhost\" 
DevicePath \"\"" Mar 17 18:44:52.294816 kubelet[1902]: I0317 18:44:52.294766 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:44:52.294879 kubelet[1902]: I0317 18:44:52.294838 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.294879 kubelet[1902]: I0317 18:44:52.294857 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.294879 kubelet[1902]: I0317 18:44:52.294869 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cni-path" (OuterVolumeSpecName: "cni-path") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.294982 kubelet[1902]: I0317 18:44:52.294889 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:44:52.297492 systemd[1]: var-lib-kubelet-pods-381a7987\x2db238\x2d42dd\x2da10e\x2da34bc050fcd7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:44:52.297609 systemd[1]: var-lib-kubelet-pods-381a7987\x2db238\x2d42dd\x2da10e\x2da34bc050fcd7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:44:52.297804 kubelet[1902]: I0317 18:44:52.297765 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-kube-api-access-skk9c" (OuterVolumeSpecName: "kube-api-access-skk9c") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "kube-api-access-skk9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:52.297887 kubelet[1902]: I0317 18:44:52.297855 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:44:52.297978 kubelet[1902]: I0317 18:44:52.297846 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:44:52.298792 kubelet[1902]: I0317 18:44:52.298752 1902 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "381a7987-b238-42dd-a10e-a34bc050fcd7" (UID: "381a7987-b238-42dd-a10e-a34bc050fcd7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:44:52.393781 kubelet[1902]: I0317 18:44:52.393694 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.393781 kubelet[1902]: I0317 18:44:52.393753 1902 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/381a7987-b238-42dd-a10e-a34bc050fcd7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.393781 kubelet[1902]: I0317 18:44:52.393766 1902 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.393781 kubelet[1902]: I0317 18:44:52.393777 1902 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-skk9c\" (UniqueName: \"kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-kube-api-access-skk9c\") on node 
\"localhost\" DevicePath \"\"" Mar 17 18:44:52.394087 kubelet[1902]: I0317 18:44:52.393810 1902 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.394087 kubelet[1902]: I0317 18:44:52.393823 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.394087 kubelet[1902]: I0317 18:44:52.393833 1902 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/381a7987-b238-42dd-a10e-a34bc050fcd7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.394087 kubelet[1902]: I0317 18:44:52.393843 1902 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/381a7987-b238-42dd-a10e-a34bc050fcd7-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.394087 kubelet[1902]: I0317 18:44:52.393855 1902 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/381a7987-b238-42dd-a10e-a34bc050fcd7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:44:52.891485 systemd[1]: Removed slice kubepods-burstable-pod381a7987_b238_42dd_a10e_a34bc050fcd7.slice. Mar 17 18:44:52.928305 kubelet[1902]: E0317 18:44:52.928237 1902 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:44:52.996154 systemd[1]: var-lib-kubelet-pods-381a7987\x2db238\x2d42dd\x2da10e\x2da34bc050fcd7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskk9c.mount: Deactivated successfully. 
Mar 17 18:44:52.996264 systemd[1]: var-lib-kubelet-pods-381a7987\x2db238\x2d42dd\x2da10e\x2da34bc050fcd7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:44:53.289082 kubelet[1902]: W0317 18:44:53.288957 1902 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:44:53.289082 kubelet[1902]: E0317 18:44:53.289037 1902 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 17 18:44:53.289742 kubelet[1902]: W0317 18:44:53.289523 1902 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:44:53.289742 kubelet[1902]: E0317 18:44:53.289544 1902 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 17 18:44:53.289742 kubelet[1902]: W0317 18:44:53.289726 1902 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps 
"cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:44:53.289875 kubelet[1902]: E0317 18:44:53.289763 1902 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 17 18:44:53.290026 kubelet[1902]: W0317 18:44:53.289947 1902 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:44:53.290026 kubelet[1902]: E0317 18:44:53.289966 1902 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Mar 17 18:44:53.291638 systemd[1]: Created slice kubepods-burstable-poda1c9ddb6_7f56_4aa9_a6e3_770cf993dbd0.slice. 
Mar 17 18:44:53.298548 kubelet[1902]: I0317 18:44:53.298493 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-host-proc-sys-net\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298548 kubelet[1902]: I0317 18:44:53.298549 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-host-proc-sys-kernel\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298769 kubelet[1902]: I0317 18:44:53.298568 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-hubble-tls\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298769 kubelet[1902]: I0317 18:44:53.298588 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppcf\" (UniqueName: \"kubernetes.io/projected/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-kube-api-access-gppcf\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298769 kubelet[1902]: I0317 18:44:53.298608 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cilium-run\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298769 kubelet[1902]: I0317 18:44:53.298624 1902 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-bpf-maps\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298769 kubelet[1902]: I0317 18:44:53.298644 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-lib-modules\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298769 kubelet[1902]: I0317 18:44:53.298661 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-xtables-lock\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298928 kubelet[1902]: I0317 18:44:53.298678 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-hostproc\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298928 kubelet[1902]: I0317 18:44:53.298706 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cilium-cgroup\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298928 kubelet[1902]: I0317 18:44:53.298731 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cni-path\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298928 kubelet[1902]: I0317 18:44:53.298752 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-clustermesh-secrets\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298928 kubelet[1902]: I0317 18:44:53.298768 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cilium-config-path\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.298928 kubelet[1902]: I0317 18:44:53.298782 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-etc-cni-netd\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:53.299062 kubelet[1902]: I0317 18:44:53.298798 1902 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cilium-ipsec-secrets\") pod \"cilium-b72nx\" (UID: \"a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0\") " pod="kube-system/cilium-b72nx" Mar 17 18:44:54.399854 kubelet[1902]: E0317 18:44:54.399784 1902 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:44:54.400321 kubelet[1902]: E0317 18:44:54.399920 1902 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cilium-config-path podName:a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0 nodeName:}" failed. No retries permitted until 2025-03-17 18:44:54.899888096 +0000 UTC m=+92.103088253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0-cilium-config-path") pod "cilium-b72nx" (UID: "a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:44:54.889317 kubelet[1902]: I0317 18:44:54.889263 1902 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="381a7987-b238-42dd-a10e-a34bc050fcd7" path="/var/lib/kubelet/pods/381a7987-b238-42dd-a10e-a34bc050fcd7/volumes" Mar 17 18:44:55.002115 kubelet[1902]: I0317 18:44:55.002022 1902 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:44:55Z","lastTransitionTime":"2025-03-17T18:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:44:55.094855 kubelet[1902]: E0317 18:44:55.094796 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:44:55.095495 env[1214]: time="2025-03-17T18:44:55.095434322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b72nx,Uid:a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0,Namespace:kube-system,Attempt:0,}" Mar 17 18:44:55.398750 env[1214]: time="2025-03-17T18:44:55.398655882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:44:55.398750 env[1214]: time="2025-03-17T18:44:55.398715676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:44:55.398750 env[1214]: time="2025-03-17T18:44:55.398738809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:44:55.399017 env[1214]: time="2025-03-17T18:44:55.398943708Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a pid=3749 runtime=io.containerd.runc.v2 Mar 17 18:44:55.415469 systemd[1]: Started cri-containerd-05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a.scope. Mar 17 18:44:55.432797 env[1214]: time="2025-03-17T18:44:55.432747132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b72nx,Uid:a1c9ddb6-7f56-4aa9-a6e3-770cf993dbd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\"" Mar 17 18:44:55.433303 kubelet[1902]: E0317 18:44:55.433281 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:44:55.435295 env[1214]: time="2025-03-17T18:44:55.435246268Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:44:55.602567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187162418.mount: Deactivated successfully. 
Mar 17 18:44:55.721071 env[1214]: time="2025-03-17T18:44:55.720911874Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2\""
Mar 17 18:44:55.721577 env[1214]: time="2025-03-17T18:44:55.721549123Z" level=info msg="StartContainer for \"2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2\""
Mar 17 18:44:55.734812 systemd[1]: Started cri-containerd-2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2.scope.
Mar 17 18:44:55.807711 env[1214]: time="2025-03-17T18:44:55.807622655Z" level=info msg="StartContainer for \"2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2\" returns successfully"
Mar 17 18:44:55.808814 systemd[1]: cri-containerd-2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2.scope: Deactivated successfully.
Mar 17 18:44:55.912888 env[1214]: time="2025-03-17T18:44:55.912819006Z" level=info msg="shim disconnected" id=2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2
Mar 17 18:44:55.912888 env[1214]: time="2025-03-17T18:44:55.912880764Z" level=warning msg="cleaning up after shim disconnected" id=2547dbfcf5c55dcb447c71271b1d5e8bc203f86410f6fa6724894a9f8054b4e2 namespace=k8s.io
Mar 17 18:44:55.912888 env[1214]: time="2025-03-17T18:44:55.912890302Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:55.919304 env[1214]: time="2025-03-17T18:44:55.919254406Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3831 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:56.246710 kubelet[1902]: E0317 18:44:56.246644 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:56.249593 env[1214]: time="2025-03-17T18:44:56.248898927Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:44:56.278488 env[1214]: time="2025-03-17T18:44:56.278406475Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0\""
Mar 17 18:44:56.279093 env[1214]: time="2025-03-17T18:44:56.279049357Z" level=info msg="StartContainer for \"4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0\""
Mar 17 18:44:56.318126 systemd[1]: Started cri-containerd-4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0.scope.
Mar 17 18:44:56.352009 env[1214]: time="2025-03-17T18:44:56.351960234Z" level=info msg="StartContainer for \"4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0\" returns successfully"
Mar 17 18:44:56.356611 systemd[1]: cri-containerd-4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0.scope: Deactivated successfully.
Mar 17 18:44:56.377822 env[1214]: time="2025-03-17T18:44:56.377759984Z" level=info msg="shim disconnected" id=4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0
Mar 17 18:44:56.377822 env[1214]: time="2025-03-17T18:44:56.377813615Z" level=warning msg="cleaning up after shim disconnected" id=4d2d326c972192ec63fb54739f9bcee441fa122c54d53215cffd84da5d4e43b0 namespace=k8s.io
Mar 17 18:44:56.377822 env[1214]: time="2025-03-17T18:44:56.377822973Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:56.384861 env[1214]: time="2025-03-17T18:44:56.384803341Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3891 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:57.250531 kubelet[1902]: E0317 18:44:57.250479 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:57.253134 env[1214]: time="2025-03-17T18:44:57.253095249Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:44:57.267695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286732180.mount: Deactivated successfully.
Mar 17 18:44:57.271780 env[1214]: time="2025-03-17T18:44:57.271723033Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565\""
Mar 17 18:44:57.272404 env[1214]: time="2025-03-17T18:44:57.272363963Z" level=info msg="StartContainer for \"d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565\""
Mar 17 18:44:57.290327 systemd[1]: Started cri-containerd-d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565.scope.
Mar 17 18:44:57.315991 env[1214]: time="2025-03-17T18:44:57.315919528Z" level=info msg="StartContainer for \"d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565\" returns successfully"
Mar 17 18:44:57.321105 systemd[1]: cri-containerd-d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565.scope: Deactivated successfully.
Mar 17 18:44:57.346161 env[1214]: time="2025-03-17T18:44:57.346100715Z" level=info msg="shim disconnected" id=d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565
Mar 17 18:44:57.346161 env[1214]: time="2025-03-17T18:44:57.346160088Z" level=warning msg="cleaning up after shim disconnected" id=d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565 namespace=k8s.io
Mar 17 18:44:57.346343 env[1214]: time="2025-03-17T18:44:57.346170468Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:57.352946 env[1214]: time="2025-03-17T18:44:57.352919770Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3949 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:57.394713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d954badf13489e7ecbe0c4a644a2eeba9611fd9aba30a52b03756aca1c84e565-rootfs.mount: Deactivated successfully.
Mar 17 18:44:57.929985 kubelet[1902]: E0317 18:44:57.929910 1902 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:44:58.254632 kubelet[1902]: E0317 18:44:58.254486 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:58.257711 env[1214]: time="2025-03-17T18:44:58.257650203Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:44:58.272595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469894651.mount: Deactivated successfully.
Mar 17 18:44:58.278207 env[1214]: time="2025-03-17T18:44:58.278144518Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761\""
Mar 17 18:44:58.278895 env[1214]: time="2025-03-17T18:44:58.278860501Z" level=info msg="StartContainer for \"35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761\""
Mar 17 18:44:58.295217 systemd[1]: Started cri-containerd-35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761.scope.
Mar 17 18:44:58.322613 systemd[1]: cri-containerd-35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761.scope: Deactivated successfully.
Mar 17 18:44:58.324257 env[1214]: time="2025-03-17T18:44:58.324198303Z" level=info msg="StartContainer for \"35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761\" returns successfully"
Mar 17 18:44:58.346200 env[1214]: time="2025-03-17T18:44:58.346112874Z" level=info msg="shim disconnected" id=35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761
Mar 17 18:44:58.346405 env[1214]: time="2025-03-17T18:44:58.346212604Z" level=warning msg="cleaning up after shim disconnected" id=35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761 namespace=k8s.io
Mar 17 18:44:58.346405 env[1214]: time="2025-03-17T18:44:58.346226199Z" level=info msg="cleaning up dead shim"
Mar 17 18:44:58.354257 env[1214]: time="2025-03-17T18:44:58.354165761Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:44:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4003 runtime=io.containerd.runc.v2\n"
Mar 17 18:44:58.394919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35242a5b685c8cdbfe92780451c6c06305f581cdc9c04659edf80b6108349761-rootfs.mount: Deactivated successfully.
Mar 17 18:44:59.258394 kubelet[1902]: E0317 18:44:59.258353 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:44:59.260473 env[1214]: time="2025-03-17T18:44:59.260298722Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:44:59.275028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243060956.mount: Deactivated successfully.
Mar 17 18:44:59.280527 env[1214]: time="2025-03-17T18:44:59.280480559Z" level=info msg="CreateContainer within sandbox \"05ce9f0cdd7c3c92d8cd19f9026a252f8ad523334db5a95197ff298603aa3c5a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1179a2be9dd4029809f4863233d31a6c4d472fd0a73b383a85a198f2bc4399b1\""
Mar 17 18:44:59.282303 env[1214]: time="2025-03-17T18:44:59.282233451Z" level=info msg="StartContainer for \"1179a2be9dd4029809f4863233d31a6c4d472fd0a73b383a85a198f2bc4399b1\""
Mar 17 18:44:59.300171 systemd[1]: Started cri-containerd-1179a2be9dd4029809f4863233d31a6c4d472fd0a73b383a85a198f2bc4399b1.scope.
Mar 17 18:44:59.331764 env[1214]: time="2025-03-17T18:44:59.331702003Z" level=info msg="StartContainer for \"1179a2be9dd4029809f4863233d31a6c4d472fd0a73b383a85a198f2bc4399b1\" returns successfully"
Mar 17 18:44:59.605213 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:44:59.887278 kubelet[1902]: E0317 18:44:59.887116 1902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bqrl2" podUID="6d90cfef-522b-4bfb-850b-040164f5e40e"
Mar 17 18:45:00.262733 kubelet[1902]: E0317 18:45:00.262574 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:00.279029 kubelet[1902]: I0317 18:45:00.278948 1902 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b72nx" podStartSLOduration=7.278921956 podStartE2EDuration="7.278921956s" podCreationTimestamp="2025-03-17 18:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:45:00.278590123 +0000 UTC m=+97.481790280" watchObservedRunningTime="2025-03-17 18:45:00.278921956 +0000 UTC m=+97.482122113"
Mar 17 18:45:01.264276 kubelet[1902]: E0317 18:45:01.264236 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:01.886972 kubelet[1902]: E0317 18:45:01.886917 1902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-z67qh" podUID="c42c9b71-b907-4740-b00c-795714787a17"
Mar 17 18:45:01.887156 kubelet[1902]: E0317 18:45:01.887014 1902 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bqrl2" podUID="6d90cfef-522b-4bfb-850b-040164f5e40e"
Mar 17 18:45:02.172723 systemd-networkd[1029]: lxc_health: Link UP
Mar 17 18:45:02.180235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:45:02.179941 systemd-networkd[1029]: lxc_health: Gained carrier
Mar 17 18:45:03.096251 kubelet[1902]: E0317 18:45:03.096203 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:03.267663 kubelet[1902]: E0317 18:45:03.267621 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:03.362356 systemd-networkd[1029]: lxc_health: Gained IPv6LL
Mar 17 18:45:03.887668 kubelet[1902]: E0317 18:45:03.887607 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:03.887897 kubelet[1902]: E0317 18:45:03.887690 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:04.269404 kubelet[1902]: E0317 18:45:04.269279 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:06.886969 kubelet[1902]: E0317 18:45:06.886920 1902 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:45:08.669713 sshd[3718]: pam_unix(sshd:session): session closed for user core
Mar 17 18:45:08.671763 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:57894.service: Deactivated successfully.
Mar 17 18:45:08.672438 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:45:08.672905 systemd-logind[1205]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:45:08.673611 systemd-logind[1205]: Removed session 26.