Sep 10 00:36:48.024262 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Sep 9 23:10:34 -00 2025
Sep 10 00:36:48.024283 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:36:48.024294 kernel: BIOS-provided physical RAM map:
Sep 10 00:36:48.024301 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 00:36:48.024308 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 00:36:48.024314 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 00:36:48.024322 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 00:36:48.024329 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 00:36:48.024335 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 10 00:36:48.024344 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 10 00:36:48.024351 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 10 00:36:48.024357 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Sep 10 00:36:48.024364 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 10 00:36:48.024371 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 00:36:48.024379 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 10 00:36:48.024388 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 10 00:36:48.024395 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 00:36:48.024402 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:36:48.024409 kernel: NX (Execute Disable) protection: active
Sep 10 00:36:48.024416 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Sep 10 00:36:48.024424 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Sep 10 00:36:48.024431 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Sep 10 00:36:48.024438 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Sep 10 00:36:48.024445 kernel: extended physical RAM map:
Sep 10 00:36:48.024452 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 00:36:48.024460 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 00:36:48.024467 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 00:36:48.024474 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 00:36:48.024481 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 00:36:48.024488 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 10 00:36:48.024495 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 10 00:36:48.024502 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Sep 10 00:36:48.024509 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Sep 10 00:36:48.024516 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Sep 10 00:36:48.024523 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Sep 10 00:36:48.024530 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Sep 10 00:36:48.024539 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Sep 10 00:36:48.024546 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 10 00:36:48.024553 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 00:36:48.024560 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 10 00:36:48.024571 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 10 00:36:48.024578 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 00:36:48.024586 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:36:48.024595 kernel: efi: EFI v2.70 by EDK II
Sep 10 00:36:48.024602 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Sep 10 00:36:48.024610 kernel: random: crng init done
Sep 10 00:36:48.024618 kernel: SMBIOS 2.8 present.
Sep 10 00:36:48.024625 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 10 00:36:48.024633 kernel: Hypervisor detected: KVM
Sep 10 00:36:48.024640 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:36:48.024648 kernel: kvm-clock: cpu 0, msr 6819f001, primary cpu clock
Sep 10 00:36:48.024655 kernel: kvm-clock: using sched offset of 5059842323 cycles
Sep 10 00:36:48.024665 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:36:48.024673 kernel: tsc: Detected 2794.748 MHz processor
Sep 10 00:36:48.024681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:36:48.024689 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:36:48.024697 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 10 00:36:48.024705 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:36:48.024712 kernel: Using GB pages for direct mapping
Sep 10 00:36:48.024720 kernel: Secure boot disabled
Sep 10 00:36:48.024728 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:36:48.024737 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 10 00:36:48.024745 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:36:48.024753 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:48.024761 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:48.024769 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 10 00:36:48.024777 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:48.024785 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:48.024793 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:48.024801 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:48.024810 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 10 00:36:48.024818 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 10 00:36:48.024838 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 10 00:36:48.024846 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 10 00:36:48.024854 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 10 00:36:48.024862 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 10 00:36:48.024869 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 10 00:36:48.024877 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 10 00:36:48.024885 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 10 00:36:48.024894 kernel: No NUMA configuration found
Sep 10 00:36:48.024902 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 10 00:36:48.024910 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 10 00:36:48.024918 kernel: Zone ranges:
Sep 10 00:36:48.024926 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:36:48.024934 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 10 00:36:48.024941 kernel: Normal empty
Sep 10 00:36:48.024949 kernel: Movable zone start for each node
Sep 10 00:36:48.024957 kernel: Early memory node ranges
Sep 10 00:36:48.024966 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 10 00:36:48.024975 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 10 00:36:48.024983 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 10 00:36:48.024992 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 10 00:36:48.025000 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 10 00:36:48.025009 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 10 00:36:48.025017 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 10 00:36:48.025026 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:36:48.025034 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 10 00:36:48.025043 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 10 00:36:48.025053 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:36:48.025062 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 10 00:36:48.025071 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 10 00:36:48.025079 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 10 00:36:48.025088 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:36:48.025097 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:36:48.025105 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:36:48.025114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:36:48.025122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:36:48.025135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:36:48.025144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:36:48.025155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:36:48.025164 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:36:48.025172 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:36:48.025181 kernel: TSC deadline timer available
Sep 10 00:36:48.025189 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:36:48.025197 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:36:48.025204 kernel: kvm-guest: setup PV sched yield
Sep 10 00:36:48.025215 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 10 00:36:48.025223 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:36:48.025237 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:36:48.025246 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:36:48.025263 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Sep 10 00:36:48.025271 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Sep 10 00:36:48.025280 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:36:48.025288 kernel: kvm-guest: setup async PF for cpu 0
Sep 10 00:36:48.025296 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Sep 10 00:36:48.025304 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:36:48.025312 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:36:48.025320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 10 00:36:48.025330 kernel: Policy zone: DMA32
Sep 10 00:36:48.025340 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272
Sep 10 00:36:48.025349 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:36:48.025357 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:36:48.025368 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:36:48.025377 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:36:48.025386 kernel: Memory: 2397432K/2567000K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 169308K reserved, 0K cma-reserved)
Sep 10 00:36:48.025395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:36:48.025404 kernel: ftrace: allocating 34612 entries in 136 pages
Sep 10 00:36:48.025412 kernel: ftrace: allocated 136 pages with 2 groups
Sep 10 00:36:48.025421 kernel: rcu: Hierarchical RCU implementation.
Sep 10 00:36:48.025430 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:36:48.025439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:36:48.025449 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:36:48.025458 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:36:48.025467 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:36:48.025476 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:36:48.025483 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:36:48.025490 kernel: Console: colour dummy device 80x25
Sep 10 00:36:48.025496 kernel: printk: console [ttyS0] enabled
Sep 10 00:36:48.025503 kernel: ACPI: Core revision 20210730
Sep 10 00:36:48.025510 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:36:48.025520 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:36:48.025528 kernel: x2apic enabled
Sep 10 00:36:48.025537 kernel: Switched APIC routing to physical x2apic.
Sep 10 00:36:48.025545 kernel: kvm-guest: setup PV IPIs
Sep 10 00:36:48.025553 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:36:48.025562 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:36:48.025570 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 10 00:36:48.025578 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:36:48.025587 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:36:48.025597 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:36:48.025605 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:36:48.025613 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:36:48.025621 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:36:48.025630 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:36:48.025638 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:36:48.025646 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:36:48.025655 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:36:48.025663 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 10 00:36:48.025673 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:36:48.025682 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:36:48.025690 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:36:48.025699 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:36:48.025708 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 10 00:36:48.025717 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:36:48.025725 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:36:48.025734 kernel: LSM: Security Framework initializing
Sep 10 00:36:48.025742 kernel: SELinux: Initializing.
Sep 10 00:36:48.025753 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:36:48.025761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:36:48.025770 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:36:48.025779 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:36:48.025787 kernel: ... version: 0
Sep 10 00:36:48.025796 kernel: ... bit width: 48
Sep 10 00:36:48.025804 kernel: ... generic registers: 6
Sep 10 00:36:48.025813 kernel: ... value mask: 0000ffffffffffff
Sep 10 00:36:48.025822 kernel: ... max period: 00007fffffffffff
Sep 10 00:36:48.025846 kernel: ... fixed-purpose events: 0
Sep 10 00:36:48.025854 kernel: ... event mask: 000000000000003f
Sep 10 00:36:48.025863 kernel: signal: max sigframe size: 1776
Sep 10 00:36:48.025872 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:36:48.025881 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:36:48.025891 kernel: x86: Booting SMP configuration:
Sep 10 00:36:48.025900 kernel: .... node #0, CPUs: #1
Sep 10 00:36:48.025910 kernel: kvm-clock: cpu 1, msr 6819f041, secondary cpu clock
Sep 10 00:36:48.025920 kernel: kvm-guest: setup async PF for cpu 1
Sep 10 00:36:48.025931 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Sep 10 00:36:48.025941 kernel: #2
Sep 10 00:36:48.025950 kernel: kvm-clock: cpu 2, msr 6819f081, secondary cpu clock
Sep 10 00:36:48.025960 kernel: kvm-guest: setup async PF for cpu 2
Sep 10 00:36:48.025969 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Sep 10 00:36:48.025977 kernel: #3
Sep 10 00:36:48.025986 kernel: kvm-clock: cpu 3, msr 6819f0c1, secondary cpu clock
Sep 10 00:36:48.025995 kernel: kvm-guest: setup async PF for cpu 3
Sep 10 00:36:48.026004 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Sep 10 00:36:48.026015 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:36:48.026024 kernel: smpboot: Max logical packages: 1
Sep 10 00:36:48.026034 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 10 00:36:48.026043 kernel: devtmpfs: initialized
Sep 10 00:36:48.026052 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:36:48.026062 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 10 00:36:48.026071 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 10 00:36:48.026081 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 10 00:36:48.026091 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 10 00:36:48.026103 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 10 00:36:48.026112 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:36:48.026121 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:36:48.026130 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:36:48.026139 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:36:48.026148 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:36:48.026157 kernel: audit: type=2000 audit(1757464607.758:1): state=initialized audit_enabled=0 res=1
Sep 10 00:36:48.026166 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:36:48.026175 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:36:48.026185 kernel: cpuidle: using governor menu
Sep 10 00:36:48.026193 kernel: ACPI: bus type PCI registered
Sep 10 00:36:48.026201 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:36:48.026210 kernel: dca service started, version 1.12.1
Sep 10 00:36:48.026218 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:36:48.026227 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Sep 10 00:36:48.026235 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:36:48.026244 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:36:48.026253 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:36:48.026272 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:36:48.026281 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:36:48.026289 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:36:48.026297 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:36:48.026305 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 10 00:36:48.026314 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 10 00:36:48.026322 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 10 00:36:48.026331 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:36:48.026339 kernel: ACPI: Interpreter enabled
Sep 10 00:36:48.026348 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:36:48.026357 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:36:48.026365 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:36:48.026373 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:36:48.026382 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:36:48.026520 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:36:48.026607 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:36:48.026686 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:36:48.026699 kernel: PCI host bridge to bus 0000:00
Sep 10 00:36:48.026787 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:36:48.026875 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:36:48.026948 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:36:48.027019 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:36:48.027088 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:36:48.027156 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 10 00:36:48.027228 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:36:48.027349 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:36:48.027457 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:36:48.027542 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 10 00:36:48.027621 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 10 00:36:48.027709 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 10 00:36:48.027796 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 10 00:36:48.027889 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:36:48.027988 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:36:48.028073 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 10 00:36:48.028161 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 10 00:36:48.028323 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 10 00:36:48.028445 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:36:48.028544 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 10 00:36:48.028626 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 10 00:36:48.028704 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 10 00:36:48.028792 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:36:48.028891 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 10 00:36:48.028970 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 10 00:36:48.029064 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 10 00:36:48.029162 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 10 00:36:48.029267 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:36:48.029352 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:36:48.029445 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:36:48.029530 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 10 00:36:48.029609 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 10 00:36:48.029696 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:36:48.029785 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 10 00:36:48.029797 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:36:48.029806 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:36:48.029815 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:36:48.029836 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:36:48.029846 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:36:48.029854 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:36:48.029863 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:36:48.029874 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:36:48.029883 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:36:48.029892 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:36:48.029901 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:36:48.029909 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:36:48.029918 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:36:48.029927 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:36:48.029936 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:36:48.029945 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:36:48.029957 kernel: iommu: Default domain type: Translated
Sep 10 00:36:48.029966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:36:48.030071 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:36:48.030172 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:36:48.030284 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:36:48.030299 kernel: vgaarb: loaded
Sep 10 00:36:48.030309 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 10 00:36:48.030317 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 10 00:36:48.030329 kernel: PTP clock support registered
Sep 10 00:36:48.030337 kernel: Registered efivars operations
Sep 10 00:36:48.030345 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:36:48.030354 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:36:48.030362 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 10 00:36:48.030370 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 10 00:36:48.030378 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Sep 10 00:36:48.030386 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Sep 10 00:36:48.030394 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 10 00:36:48.030403 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 10 00:36:48.030413 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:36:48.030421 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:36:48.030429 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:36:48.030437 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:36:48.030446 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:36:48.030454 kernel: pnp: PnP ACPI init
Sep 10 00:36:48.030561 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:36:48.030576 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:36:48.030585 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:36:48.030593 kernel: NET: Registered PF_INET protocol family
Sep 10 00:36:48.030602 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:36:48.030610 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:36:48.030618 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:36:48.030627 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:36:48.030635 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 10 00:36:48.030644 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:36:48.030654 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:36:48.030662 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:36:48.030670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:36:48.030679 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:36:48.030760 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 10 00:36:48.030869 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 10 00:36:48.030956 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:36:48.031035 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:36:48.031109 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:36:48.031184 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:36:48.031270 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:36:48.031345 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 10 00:36:48.031357 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:36:48.031365 kernel: Initialise system trusted keyrings
Sep 10 00:36:48.031374 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:36:48.031382 kernel: Key type asymmetric registered
Sep 10 00:36:48.031390 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:36:48.031401 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 00:36:48.031409 kernel: io scheduler mq-deadline registered
Sep 10 00:36:48.031427 kernel: io scheduler kyber registered
Sep 10 00:36:48.031437 kernel: io scheduler bfq registered
Sep 10 00:36:48.031446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:36:48.031455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:36:48.031464 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:36:48.031473 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:36:48.031483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:36:48.031493 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:36:48.031503 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:36:48.031512 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:36:48.031521 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:36:48.031614 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:36:48.031628 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:36:48.031698 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:36:48.031770 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:36:47 UTC (1757464607)
Sep 10 00:36:48.031867 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:36:48.031880 kernel: efifb: probing for efifb
Sep 10 00:36:48.031889 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 10 00:36:48.031898 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 10 00:36:48.031907 kernel: efifb: scrolling: redraw
Sep 10 00:36:48.031916 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 10 00:36:48.031926 kernel: Console: switching to colour frame buffer device 160x50
Sep 10 00:36:48.031936 kernel: fb0: EFI VGA frame buffer device
Sep 10 00:36:48.031946 kernel: pstore: Registered efi as persistent store backend
Sep 10 00:36:48.031959 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:36:48.031969 kernel: Segment Routing with IPv6
Sep 10 00:36:48.031979 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:36:48.031991 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:36:48.032002 kernel: Key type dns_resolver registered
Sep 10 00:36:48.032013 kernel: IPI shorthand broadcast: enabled
Sep 10 00:36:48.032023 kernel: sched_clock: Marking stable (490187379, 123022062)->(689277130, -76067689)
Sep 10 00:36:48.032033 kernel: registered taskstats version 1
Sep 10 00:36:48.032043 kernel: Loading compiled-in X.509 certificates
Sep 10 00:36:48.032052 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 3af57cd809cc9e43d7af9f276bb20b532a4147af'
Sep 10 00:36:48.032062 kernel: Key type .fscrypt registered
Sep 10 00:36:48.032071 kernel: Key type fscrypt-provisioning registered
Sep 10 00:36:48.032081 kernel: pstore: Using crash dump compression: deflate
Sep 10 00:36:48.032091 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:36:48.032103 kernel: ima: Allocated hash algorithm: sha1 Sep 10 00:36:48.032113 kernel: ima: No architecture policies found Sep 10 00:36:48.032122 kernel: clk: Disabling unused clocks Sep 10 00:36:48.032133 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 10 00:36:48.032145 kernel: Write protecting the kernel read-only data: 28672k Sep 10 00:36:48.032157 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 10 00:36:48.032168 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 10 00:36:48.032177 kernel: Run /init as init process Sep 10 00:36:48.032187 kernel: with arguments: Sep 10 00:36:48.032199 kernel: /init Sep 10 00:36:48.032210 kernel: with environment: Sep 10 00:36:48.032219 kernel: HOME=/ Sep 10 00:36:48.032228 kernel: TERM=linux Sep 10 00:36:48.032237 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 00:36:48.032249 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 10 00:36:48.032272 systemd[1]: Detected virtualization kvm. Sep 10 00:36:48.032283 systemd[1]: Detected architecture x86-64. Sep 10 00:36:48.032294 systemd[1]: Running in initrd. Sep 10 00:36:48.032303 systemd[1]: No hostname configured, using default hostname. Sep 10 00:36:48.032312 systemd[1]: Hostname set to <localhost>. Sep 10 00:36:48.032322 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:36:48.032331 systemd[1]: Queued start job for default target initrd.target. Sep 10 00:36:48.032340 systemd[1]: Started systemd-ask-password-console.path. Sep 10 00:36:48.032349 systemd[1]: Reached target cryptsetup.target. Sep 10 00:36:48.032358 systemd[1]: Reached target paths.target. Sep 10 00:36:48.032367 systemd[1]: Reached target slices.target. 
Sep 10 00:36:48.032378 systemd[1]: Reached target swap.target. Sep 10 00:36:48.032387 systemd[1]: Reached target timers.target. Sep 10 00:36:48.032397 systemd[1]: Listening on iscsid.socket. Sep 10 00:36:48.032406 systemd[1]: Listening on iscsiuio.socket. Sep 10 00:36:48.032415 systemd[1]: Listening on systemd-journald-audit.socket. Sep 10 00:36:48.032424 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 10 00:36:48.032434 systemd[1]: Listening on systemd-journald.socket. Sep 10 00:36:48.032444 systemd[1]: Listening on systemd-networkd.socket. Sep 10 00:36:48.032454 systemd[1]: Listening on systemd-udevd-control.socket. Sep 10 00:36:48.032463 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 10 00:36:48.032472 systemd[1]: Reached target sockets.target. Sep 10 00:36:48.032481 systemd[1]: Starting kmod-static-nodes.service... Sep 10 00:36:48.032490 systemd[1]: Finished network-cleanup.service. Sep 10 00:36:48.032499 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 00:36:48.032508 systemd[1]: Starting systemd-journald.service... Sep 10 00:36:48.032517 systemd[1]: Starting systemd-modules-load.service... Sep 10 00:36:48.032528 systemd[1]: Starting systemd-resolved.service... Sep 10 00:36:48.032537 systemd[1]: Starting systemd-vconsole-setup.service... Sep 10 00:36:48.032546 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:36:48.032556 kernel: audit: type=1130 audit(1757464608.024:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.032565 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 00:36:48.032574 kernel: audit: type=1130 audit(1757464608.029:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:48.032586 systemd-journald[196]: Journal started Sep 10 00:36:48.032634 systemd-journald[196]: Runtime Journal (/run/log/journal/faff4a3b61414f49ae0ab50b6e946029) is 6.0M, max 48.4M, 42.4M free. Sep 10 00:36:48.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.033860 systemd[1]: Started systemd-journald.service. Sep 10 00:36:48.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.035544 systemd[1]: Finished systemd-vconsole-setup.service. Sep 10 00:36:48.043508 kernel: audit: type=1130 audit(1757464608.034:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.043540 kernel: audit: type=1130 audit(1757464608.038:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.038500 systemd-modules-load[197]: Inserted module 'overlay' Sep 10 00:36:48.043523 systemd[1]: Starting dracut-cmdline-ask.service... Sep 10 00:36:48.044603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Sep 10 00:36:48.053755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 10 00:36:48.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.058852 kernel: audit: type=1130 audit(1757464608.054:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.063966 systemd-resolved[198]: Positive Trust Anchors: Sep 10 00:36:48.063982 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:36:48.064009 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:36:48.067088 systemd-resolved[198]: Defaulting to hostname 'linux'. Sep 10 00:36:48.082251 kernel: audit: type=1130 audit(1757464608.072:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.082317 kernel: audit: type=1130 audit(1757464608.077:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:48.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.067871 systemd[1]: Started systemd-resolved.service. Sep 10 00:36:48.073955 systemd[1]: Finished dracut-cmdline-ask.service. Sep 10 00:36:48.087098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 00:36:48.078234 systemd[1]: Reached target nss-lookup.target. Sep 10 00:36:48.082572 systemd[1]: Starting dracut-cmdline.service... Sep 10 00:36:48.091742 dracut-cmdline[215]: dracut-dracut-053 Sep 10 00:36:48.093887 kernel: Bridge firewalling registered Sep 10 00:36:48.093918 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ebdf135b7dd8c9596dea7f2ca48bf31be0143f7cba32a9cc0282a66ca6db3272 Sep 10 00:36:48.093922 systemd-modules-load[197]: Inserted module 'br_netfilter' Sep 10 00:36:48.116855 kernel: SCSI subsystem initialized Sep 10 00:36:48.128852 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 10 00:36:48.131117 kernel: device-mapper: uevent: version 1.0.3 Sep 10 00:36:48.131138 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 10 00:36:48.134696 systemd-modules-load[197]: Inserted module 'dm_multipath' Sep 10 00:36:48.135625 systemd[1]: Finished systemd-modules-load.service. Sep 10 00:36:48.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.138075 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:36:48.141598 kernel: audit: type=1130 audit(1757464608.136:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.147164 systemd[1]: Finished systemd-sysctl.service. Sep 10 00:36:48.152030 kernel: audit: type=1130 audit(1757464608.147:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.153844 kernel: Loading iSCSI transport class v2.0-870. Sep 10 00:36:48.169860 kernel: iscsi: registered transport (tcp) Sep 10 00:36:48.190232 kernel: iscsi: registered transport (qla4xxx) Sep 10 00:36:48.190273 kernel: QLogic iSCSI HBA Driver Sep 10 00:36:48.212915 systemd[1]: Finished dracut-cmdline.service. Sep 10 00:36:48.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:48.214918 systemd[1]: Starting dracut-pre-udev.service... Sep 10 00:36:48.262869 kernel: raid6: avx2x4 gen() 22573 MB/s Sep 10 00:36:48.279873 kernel: raid6: avx2x4 xor() 6459 MB/s Sep 10 00:36:48.296870 kernel: raid6: avx2x2 gen() 25321 MB/s Sep 10 00:36:48.313860 kernel: raid6: avx2x2 xor() 17474 MB/s Sep 10 00:36:48.330888 kernel: raid6: avx2x1 gen() 23255 MB/s Sep 10 00:36:48.347869 kernel: raid6: avx2x1 xor() 14583 MB/s Sep 10 00:36:48.364872 kernel: raid6: sse2x4 gen() 14529 MB/s Sep 10 00:36:48.381850 kernel: raid6: sse2x4 xor() 7299 MB/s Sep 10 00:36:48.398846 kernel: raid6: sse2x2 gen() 16130 MB/s Sep 10 00:36:48.415851 kernel: raid6: sse2x2 xor() 9399 MB/s Sep 10 00:36:48.432856 kernel: raid6: sse2x1 gen() 11970 MB/s Sep 10 00:36:48.450173 kernel: raid6: sse2x1 xor() 7592 MB/s Sep 10 00:36:48.450193 kernel: raid6: using algorithm avx2x2 gen() 25321 MB/s Sep 10 00:36:48.450204 kernel: raid6: .... xor() 17474 MB/s, rmw enabled Sep 10 00:36:48.450863 kernel: raid6: using avx2x2 recovery algorithm Sep 10 00:36:48.463851 kernel: xor: automatically using best checksumming function avx Sep 10 00:36:48.556858 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 10 00:36:48.564945 systemd[1]: Finished dracut-pre-udev.service. Sep 10 00:36:48.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.565000 audit: BPF prog-id=7 op=LOAD Sep 10 00:36:48.566000 audit: BPF prog-id=8 op=LOAD Sep 10 00:36:48.567219 systemd[1]: Starting systemd-udevd.service... Sep 10 00:36:48.582492 systemd-udevd[398]: Using default interface naming scheme 'v252'. Sep 10 00:36:48.587384 systemd[1]: Started systemd-udevd.service. 
Sep 10 00:36:48.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.590951 systemd[1]: Starting dracut-pre-trigger.service... Sep 10 00:36:48.601204 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Sep 10 00:36:48.626997 systemd[1]: Finished dracut-pre-trigger.service. Sep 10 00:36:48.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.628615 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:36:48.662567 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:36:48.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:48.689403 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 10 00:36:48.696357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 00:36:48.696377 kernel: GPT:9289727 != 19775487 Sep 10 00:36:48.696390 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 00:36:48.696402 kernel: GPT:9289727 != 19775487 Sep 10 00:36:48.696414 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 00:36:48.696425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:36:48.701855 kernel: libata version 3.00 loaded. Sep 10 00:36:48.703845 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 00:36:48.715122 kernel: AVX2 version of gcm_enc/dec engaged. 
Sep 10 00:36:48.715151 kernel: AES CTR mode by8 optimization enabled Sep 10 00:36:48.721343 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 00:36:48.759510 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 00:36:48.759532 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 10 00:36:48.759653 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 00:36:48.759770 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (444) Sep 10 00:36:48.759785 kernel: scsi host0: ahci Sep 10 00:36:48.759945 kernel: scsi host1: ahci Sep 10 00:36:48.760061 kernel: scsi host2: ahci Sep 10 00:36:48.760198 kernel: scsi host3: ahci Sep 10 00:36:48.760325 kernel: scsi host4: ahci Sep 10 00:36:48.760428 kernel: scsi host5: ahci Sep 10 00:36:48.760533 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 10 00:36:48.760547 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 10 00:36:48.760559 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 10 00:36:48.760571 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 10 00:36:48.760586 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 10 00:36:48.760598 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 10 00:36:48.739202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 10 00:36:48.743530 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 10 00:36:48.746101 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 10 00:36:48.753753 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 10 00:36:48.761071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:36:48.763739 systemd[1]: Starting disk-uuid.service... Sep 10 00:36:48.771028 disk-uuid[539]: Primary Header is updated. 
Sep 10 00:36:48.771028 disk-uuid[539]: Secondary Entries is updated. Sep 10 00:36:48.771028 disk-uuid[539]: Secondary Header is updated. Sep 10 00:36:48.774565 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:36:48.776852 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:36:49.073185 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 00:36:49.073289 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 00:36:49.073302 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 00:36:49.073313 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 00:36:49.074847 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 00:36:49.075851 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 10 00:36:49.077235 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 10 00:36:49.077249 kernel: ata3.00: applying bridge limits Sep 10 00:36:49.077893 kernel: ata3.00: configured for UDMA/100 Sep 10 00:36:49.078852 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 10 00:36:49.122860 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 10 00:36:49.139710 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 10 00:36:49.139723 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 10 00:36:49.779866 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 00:36:49.780119 disk-uuid[540]: The operation has completed successfully. Sep 10 00:36:49.806291 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 10 00:36:49.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:49.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:49.806375 systemd[1]: Finished disk-uuid.service. 
Sep 10 00:36:49.811440 systemd[1]: Starting verity-setup.service... Sep 10 00:36:49.827854 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 10 00:36:49.848951 systemd[1]: Found device dev-mapper-usr.device. Sep 10 00:36:49.850694 systemd[1]: Mounting sysusr-usr.mount... Sep 10 00:36:49.853083 systemd[1]: Finished verity-setup.service. Sep 10 00:36:49.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:49.919871 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 10 00:36:49.920545 systemd[1]: Mounted sysusr-usr.mount. Sep 10 00:36:49.921583 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 10 00:36:49.922443 systemd[1]: Starting ignition-setup.service... Sep 10 00:36:49.925396 systemd[1]: Starting parse-ip-for-networkd.service... Sep 10 00:36:49.933543 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:36:49.933600 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:36:49.933614 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:36:49.942813 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 10 00:36:49.952366 systemd[1]: Finished ignition-setup.service. Sep 10 00:36:49.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:49.953942 systemd[1]: Starting ignition-fetch-offline.service... Sep 10 00:36:50.038912 systemd[1]: Finished parse-ip-for-networkd.service. Sep 10 00:36:50.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:36:50.040000 audit: BPF prog-id=9 op=LOAD Sep 10 00:36:50.041431 systemd[1]: Starting systemd-networkd.service... Sep 10 00:36:50.063589 systemd-networkd[722]: lo: Link UP Sep 10 00:36:50.063598 systemd-networkd[722]: lo: Gained carrier Sep 10 00:36:50.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.064157 systemd-networkd[722]: Enumeration completed Sep 10 00:36:50.064235 systemd[1]: Started systemd-networkd.service. Sep 10 00:36:50.064694 systemd[1]: Reached target network.target. Sep 10 00:36:50.065102 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:36:50.066251 systemd-networkd[722]: eth0: Link UP Sep 10 00:36:50.066259 systemd-networkd[722]: eth0: Gained carrier Sep 10 00:36:50.068087 systemd[1]: Starting iscsiuio.service... Sep 10 00:36:50.080485 ignition[648]: Ignition 2.14.0 Sep 10 00:36:50.080496 ignition[648]: Stage: fetch-offline Sep 10 00:36:50.080566 ignition[648]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:50.080578 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:50.080711 ignition[648]: parsed url from cmdline: "" Sep 10 00:36:50.080715 ignition[648]: no config URL provided Sep 10 00:36:50.080720 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 00:36:50.080727 ignition[648]: no config at "/usr/lib/ignition/user.ign" Sep 10 00:36:50.080747 ignition[648]: op(1): [started] loading QEMU firmware config module Sep 10 00:36:50.080752 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 10 00:36:50.091982 ignition[648]: op(1): [finished] loading QEMU firmware config module Sep 10 00:36:50.099897 systemd[1]: Started iscsiuio.service. 
Sep 10 00:36:50.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.102752 systemd[1]: Starting iscsid.service... Sep 10 00:36:50.107850 iscsid[729]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:36:50.107850 iscsid[729]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 10 00:36:50.107850 iscsid[729]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 10 00:36:50.107850 iscsid[729]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 10 00:36:50.107850 iscsid[729]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 10 00:36:50.107850 iscsid[729]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 10 00:36:50.117984 systemd[1]: Started iscsid.service. Sep 10 00:36:50.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.125705 systemd[1]: Starting dracut-initqueue.service... Sep 10 00:36:50.140514 systemd[1]: Finished dracut-initqueue.service. Sep 10 00:36:50.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.141316 systemd[1]: Reached target remote-fs-pre.target. 
Sep 10 00:36:50.143112 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:36:50.144843 systemd[1]: Reached target remote-fs.target. Sep 10 00:36:50.147626 systemd[1]: Starting dracut-pre-mount.service... Sep 10 00:36:50.157171 systemd[1]: Finished dracut-pre-mount.service. Sep 10 00:36:50.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.165591 ignition[648]: parsing config with SHA512: 3234ba5f0dba19a3595a3caf63934f304c97325fb0ce699d633eaef43f55b9a8c8ce6436d9106781843866230f512dc910a7db5b47876237e1f0c111d8983a42 Sep 10 00:36:50.171005 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:36:50.175321 unknown[648]: fetched base config from "system" Sep 10 00:36:50.175330 unknown[648]: fetched user config from "qemu" Sep 10 00:36:50.177338 ignition[648]: fetch-offline: fetch-offline passed Sep 10 00:36:50.178239 ignition[648]: Ignition finished successfully Sep 10 00:36:50.179990 systemd[1]: Finished ignition-fetch-offline.service. Sep 10 00:36:50.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.180957 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 10 00:36:50.181761 systemd[1]: Starting ignition-kargs.service... 
Sep 10 00:36:50.200607 ignition[743]: Ignition 2.14.0 Sep 10 00:36:50.200618 ignition[743]: Stage: kargs Sep 10 00:36:50.200747 ignition[743]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:50.200759 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:50.203047 ignition[743]: kargs: kargs passed Sep 10 00:36:50.203113 ignition[743]: Ignition finished successfully Sep 10 00:36:50.204927 systemd[1]: Finished ignition-kargs.service. Sep 10 00:36:50.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.207251 systemd[1]: Starting ignition-disks.service... Sep 10 00:36:50.216972 ignition[749]: Ignition 2.14.0 Sep 10 00:36:50.216982 ignition[749]: Stage: disks Sep 10 00:36:50.217078 ignition[749]: no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:50.217088 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:50.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.218632 systemd[1]: Finished ignition-disks.service. Sep 10 00:36:50.217949 ignition[749]: disks: disks passed Sep 10 00:36:50.219947 systemd[1]: Reached target initrd-root-device.target. Sep 10 00:36:50.217986 ignition[749]: Ignition finished successfully Sep 10 00:36:50.221666 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:36:50.222459 systemd[1]: Reached target local-fs.target. Sep 10 00:36:50.223872 systemd[1]: Reached target sysinit.target. Sep 10 00:36:50.224601 systemd[1]: Reached target basic.target. Sep 10 00:36:50.226726 systemd[1]: Starting systemd-fsck-root.service... 
Sep 10 00:36:50.238289 systemd-fsck[757]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 10 00:36:50.244037 systemd[1]: Finished systemd-fsck-root.service. Sep 10 00:36:50.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.246053 systemd[1]: Mounting sysroot.mount... Sep 10 00:36:50.254868 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 10 00:36:50.255522 systemd[1]: Mounted sysroot.mount. Sep 10 00:36:50.256288 systemd[1]: Reached target initrd-root-fs.target. Sep 10 00:36:50.258779 systemd[1]: Mounting sysroot-usr.mount... Sep 10 00:36:50.259821 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 10 00:36:50.259864 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 00:36:50.259884 systemd[1]: Reached target ignition-diskful.target. Sep 10 00:36:50.261921 systemd[1]: Mounted sysroot-usr.mount. Sep 10 00:36:50.263770 systemd[1]: Starting initrd-setup-root.service... Sep 10 00:36:50.268643 initrd-setup-root[767]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 00:36:50.271141 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory Sep 10 00:36:50.274356 initrd-setup-root[783]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 00:36:50.277750 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 00:36:50.536869 systemd[1]: Finished initrd-setup-root.service. Sep 10 00:36:50.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:50.538307 systemd[1]: Starting ignition-mount.service... Sep 10 00:36:50.540285 systemd[1]: Starting sysroot-boot.service... Sep 10 00:36:50.544653 bash[808]: umount: /sysroot/usr/share/oem: not mounted. Sep 10 00:36:50.555839 ignition[809]: INFO : Ignition 2.14.0 Sep 10 00:36:50.556819 ignition[809]: INFO : Stage: mount Sep 10 00:36:50.557906 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:50.557906 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:50.561485 ignition[809]: INFO : mount: mount passed Sep 10 00:36:50.561546 systemd[1]: Finished sysroot-boot.service. Sep 10 00:36:50.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.563659 ignition[809]: INFO : Ignition finished successfully Sep 10 00:36:50.564700 systemd[1]: Finished ignition-mount.service. Sep 10 00:36:50.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:50.860617 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 10 00:36:50.869891 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Sep 10 00:36:50.869917 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:36:50.869927 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:36:50.871375 kernel: BTRFS info (device vda6): has skinny extents Sep 10 00:36:50.874364 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 10 00:36:50.875225 systemd[1]: Starting ignition-files.service... 
Sep 10 00:36:50.892232 ignition[838]: INFO : Ignition 2.14.0 Sep 10 00:36:50.892232 ignition[838]: INFO : Stage: files Sep 10 00:36:50.894381 ignition[838]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:50.894381 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:50.894381 ignition[838]: DEBUG : files: compiled without relabeling support, skipping Sep 10 00:36:50.898529 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 00:36:50.898529 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 00:36:50.898529 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 00:36:50.898529 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 00:36:50.898529 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 00:36:50.898529 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 10 00:36:50.898529 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 10 00:36:50.898529 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 00:36:50.898529 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 10 00:36:50.897648 unknown[838]: wrote ssh authorized keys file for user: core Sep 10 00:36:51.183585 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 10 00:36:51.761351 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 10 
00:36:51.763407 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 00:36:51.763407 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 10 00:36:52.021122 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 10 00:36:52.025040 systemd-networkd[722]: eth0: Gained IPv6LL Sep 10 00:36:52.330406 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 10 00:36:52.330406 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 10 00:36:52.333891 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 00:36:52.335542 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:36:52.337241 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:36:52.338925 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:36:52.340620 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:36:52.342286 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:36:52.344021 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:36:52.345684 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:36:52.347385 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:36:52.349004 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:36:52.351354 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:36:52.353719 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:36:52.355761 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 10 00:36:52.760274 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Sep 10 00:36:53.724382 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 10 00:36:53.724382 ignition[838]: INFO : files: op(d): [started] processing unit "containerd.service" Sep 10 00:36:53.728517 ignition[838]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 10 00:36:53.731062 ignition[838]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 10 00:36:53.731062 ignition[838]: INFO : files: op(d): [finished] processing unit "containerd.service" Sep 10 00:36:53.734849 ignition[838]: INFO : files: op(f): 
[started] processing unit "prepare-helm.service" Sep 10 00:36:53.734849 ignition[838]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:36:53.738501 ignition[838]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:36:53.738501 ignition[838]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Sep 10 00:36:53.742001 ignition[838]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Sep 10 00:36:53.742001 ignition[838]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:36:53.745294 ignition[838]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:36:53.745294 ignition[838]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Sep 10 00:36:53.745294 ignition[838]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 10 00:36:53.749695 ignition[838]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 00:36:53.749695 ignition[838]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 00:36:53.752290 ignition[838]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:36:53.816787 ignition[838]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:36:53.818554 ignition[838]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 00:36:53.818554 ignition[838]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" 
Sep 10 00:36:53.818554 ignition[838]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:36:53.818554 ignition[838]: INFO : files: files passed Sep 10 00:36:53.818554 ignition[838]: INFO : Ignition finished successfully Sep 10 00:36:53.825982 systemd[1]: Finished ignition-files.service. Sep 10 00:36:53.831606 kernel: kauditd_printk_skb: 25 callbacks suppressed Sep 10 00:36:53.831627 kernel: audit: type=1130 audit(1757464613.825:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.831674 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 10 00:36:53.832248 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 10 00:36:53.833390 systemd[1]: Starting ignition-quench.service... Sep 10 00:36:53.845396 kernel: audit: type=1130 audit(1757464613.838:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.845426 kernel: audit: type=1131 audit(1757464613.838:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:53.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.837412 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 00:36:53.846609 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 10 00:36:53.837499 systemd[1]: Finished ignition-quench.service. Sep 10 00:36:53.849219 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:36:53.851322 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 10 00:36:53.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.853685 systemd[1]: Reached target ignition-complete.target. Sep 10 00:36:53.858398 kernel: audit: type=1130 audit(1757464613.853:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.859132 systemd[1]: Starting initrd-parse-etc.service... Sep 10 00:36:53.870432 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 00:36:53.871456 systemd[1]: Finished initrd-parse-etc.service. Sep 10 00:36:53.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.873180 systemd[1]: Reached target initrd-fs.target. 
Sep 10 00:36:53.880338 kernel: audit: type=1130 audit(1757464613.873:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.880371 kernel: audit: type=1131 audit(1757464613.873:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.880300 systemd[1]: Reached target initrd.target. Sep 10 00:36:53.882011 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 10 00:36:53.884184 systemd[1]: Starting dracut-pre-pivot.service... Sep 10 00:36:53.893663 systemd[1]: Finished dracut-pre-pivot.service. Sep 10 00:36:53.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.895983 systemd[1]: Starting initrd-cleanup.service... Sep 10 00:36:53.899416 kernel: audit: type=1130 audit(1757464613.895:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.905091 systemd[1]: Stopped target nss-lookup.target. Sep 10 00:36:53.906768 systemd[1]: Stopped target remote-cryptsetup.target. Sep 10 00:36:53.908587 systemd[1]: Stopped target timers.target. Sep 10 00:36:53.910220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 00:36:53.911226 systemd[1]: Stopped dracut-pre-pivot.service. 
Sep 10 00:36:53.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.913044 systemd[1]: Stopped target initrd.target. Sep 10 00:36:53.917259 kernel: audit: type=1131 audit(1757464613.912:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.917329 systemd[1]: Stopped target basic.target. Sep 10 00:36:53.919011 systemd[1]: Stopped target ignition-complete.target. Sep 10 00:36:53.920870 systemd[1]: Stopped target ignition-diskful.target. Sep 10 00:36:53.922923 systemd[1]: Stopped target initrd-root-device.target. Sep 10 00:36:53.925129 systemd[1]: Stopped target remote-fs.target. Sep 10 00:36:53.926932 systemd[1]: Stopped target remote-fs-pre.target. Sep 10 00:36:53.928768 systemd[1]: Stopped target sysinit.target. Sep 10 00:36:53.930405 systemd[1]: Stopped target local-fs.target. Sep 10 00:36:53.932185 systemd[1]: Stopped target local-fs-pre.target. Sep 10 00:36:53.934130 systemd[1]: Stopped target swap.target. Sep 10 00:36:53.935764 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 00:36:53.936908 systemd[1]: Stopped dracut-pre-mount.service. Sep 10 00:36:53.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.938747 systemd[1]: Stopped target cryptsetup.target. Sep 10 00:36:53.943796 kernel: audit: type=1131 audit(1757464613.937:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.943845 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Sep 10 00:36:53.944985 systemd[1]: Stopped dracut-initqueue.service. Sep 10 00:36:53.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.946905 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:36:53.951061 kernel: audit: type=1131 audit(1757464613.945:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.947026 systemd[1]: Stopped ignition-fetch-offline.service. Sep 10 00:36:53.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.953276 systemd[1]: Stopped target paths.target. Sep 10 00:36:53.955145 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:36:53.961899 systemd[1]: Stopped systemd-ask-password-console.path. Sep 10 00:36:53.964640 systemd[1]: Stopped target slices.target. Sep 10 00:36:53.966749 systemd[1]: Stopped target sockets.target. Sep 10 00:36:53.969235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:36:53.970900 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 10 00:36:53.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.973409 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:36:53.974643 systemd[1]: Stopped ignition-files.service. 
Sep 10 00:36:53.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.978817 systemd[1]: Stopping ignition-mount.service... Sep 10 00:36:53.980726 systemd[1]: Stopping iscsid.service... Sep 10 00:36:53.982184 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:36:53.983258 iscsid[729]: iscsid shutting down. Sep 10 00:36:53.983393 systemd[1]: Stopped kmod-static-nodes.service. Sep 10 00:36:53.986913 systemd[1]: Stopping sysroot-boot.service... Sep 10 00:36:53.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.988445 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:36:53.989725 systemd[1]: Stopped systemd-udev-trigger.service. Sep 10 00:36:53.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:53.991787 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:36:53.992985 systemd[1]: Stopped dracut-pre-trigger.service. Sep 10 00:36:53.994778 ignition[878]: INFO : Ignition 2.14.0 Sep 10 00:36:53.994778 ignition[878]: INFO : Stage: umount Sep 10 00:36:53.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:53.997398 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:53.997398 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:53.999930 ignition[878]: INFO : umount: umount passed Sep 10 00:36:53.999930 ignition[878]: INFO : Ignition finished successfully Sep 10 00:36:54.003147 systemd[1]: iscsid.service: Deactivated successfully. Sep 10 00:36:54.004209 systemd[1]: Stopped iscsid.service. Sep 10 00:36:54.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.007800 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:36:54.009496 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:36:54.010745 systemd[1]: Stopped ignition-mount.service. Sep 10 00:36:54.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.013060 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:36:54.015386 systemd[1]: Closed iscsid.socket. Sep 10 00:36:54.017019 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 00:36:54.017063 systemd[1]: Stopped ignition-disks.service. Sep 10 00:36:54.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.020110 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:36:54.020156 systemd[1]: Stopped ignition-kargs.service. Sep 10 00:36:54.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:54.023185 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:36:54.023241 systemd[1]: Stopped ignition-setup.service. Sep 10 00:36:54.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.026508 systemd[1]: Stopping iscsiuio.service... Sep 10 00:36:54.028816 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:36:54.029881 systemd[1]: Finished initrd-cleanup.service. Sep 10 00:36:54.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.031766 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 10 00:36:54.032671 systemd[1]: Stopped iscsiuio.service. Sep 10 00:36:54.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.035174 systemd[1]: Stopped target network.target. Sep 10 00:36:54.036924 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 00:36:54.036956 systemd[1]: Closed iscsiuio.socket. Sep 10 00:36:54.039305 systemd[1]: Stopping systemd-networkd.service... Sep 10 00:36:54.041310 systemd[1]: Stopping systemd-resolved.service... Sep 10 00:36:54.045904 systemd-networkd[722]: eth0: DHCPv6 lease lost Sep 10 00:36:54.048390 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:36:54.048532 systemd[1]: Stopped systemd-networkd.service. 
Sep 10 00:36:54.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.049666 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:36:54.049691 systemd[1]: Closed systemd-networkd.socket. Sep 10 00:36:54.053000 audit: BPF prog-id=9 op=UNLOAD Sep 10 00:36:54.055137 systemd[1]: Stopping network-cleanup.service... Sep 10 00:36:54.056730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:36:54.056788 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 10 00:36:54.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.059766 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:36:54.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.059808 systemd[1]: Stopped systemd-sysctl.service. Sep 10 00:36:54.061984 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:36:54.062017 systemd[1]: Stopped systemd-modules-load.service. Sep 10 00:36:54.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.065676 systemd[1]: Stopping systemd-udevd.service... Sep 10 00:36:54.068683 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 00:36:54.069423 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Sep 10 00:36:54.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.069565 systemd[1]: Stopped systemd-resolved.service. Sep 10 00:36:54.073000 audit: BPF prog-id=6 op=UNLOAD Sep 10 00:36:54.076049 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:36:54.077073 systemd[1]: Stopped systemd-udevd.service. Sep 10 00:36:54.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.079304 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:36:54.079392 systemd[1]: Stopped network-cleanup.service. Sep 10 00:36:54.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.082121 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:36:54.082187 systemd[1]: Closed systemd-udevd-control.socket. Sep 10 00:36:54.084944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:36:54.084975 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 10 00:36:54.087838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:36:54.087905 systemd[1]: Stopped dracut-pre-udev.service. Sep 10 00:36:54.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.090993 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:36:54.091033 systemd[1]: Stopped dracut-cmdline.service. 
Sep 10 00:36:54.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.093120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:36:54.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.093986 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 10 00:36:54.098277 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 10 00:36:54.100357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:36:54.100441 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 10 00:36:54.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.104143 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:36:54.105454 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 10 00:36:54.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.123616 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:36:54.123707 systemd[1]: Stopped sysroot-boot.service. 
Sep 10 00:36:54.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.126086 systemd[1]: Reached target initrd-switch-root.target. Sep 10 00:36:54.127721 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:36:54.127757 systemd[1]: Stopped initrd-setup-root.service. Sep 10 00:36:54.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:54.130795 systemd[1]: Starting initrd-switch-root.service... Sep 10 00:36:54.137086 systemd[1]: Switching root. Sep 10 00:36:54.137000 audit: BPF prog-id=5 op=UNLOAD Sep 10 00:36:54.137000 audit: BPF prog-id=4 op=UNLOAD Sep 10 00:36:54.137000 audit: BPF prog-id=3 op=UNLOAD Sep 10 00:36:54.140000 audit: BPF prog-id=8 op=UNLOAD Sep 10 00:36:54.140000 audit: BPF prog-id=7 op=UNLOAD Sep 10 00:36:54.156517 systemd-journald[196]: Journal stopped Sep 10 00:36:57.770599 systemd-journald[196]: Received SIGTERM from PID 1 (systemd). Sep 10 00:36:57.770658 kernel: SELinux: Class mctp_socket not defined in policy. Sep 10 00:36:57.770671 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 10 00:36:57.770680 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 10 00:36:57.770690 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:36:57.770698 kernel: SELinux: policy capability open_perms=1 Sep 10 00:36:57.770708 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:36:57.770721 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:36:57.770734 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:36:57.770743 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:36:57.770752 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:36:57.770761 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:36:57.770771 systemd[1]: Successfully loaded SELinux policy in 44.203ms. Sep 10 00:36:57.770802 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.421ms. Sep 10 00:36:57.770818 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 10 00:36:57.770872 systemd[1]: Detected virtualization kvm. Sep 10 00:36:57.770883 systemd[1]: Detected architecture x86-64. Sep 10 00:36:57.770893 systemd[1]: Detected first boot. Sep 10 00:36:57.770903 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:36:57.770913 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 10 00:36:57.770923 systemd[1]: Populated /etc with preset unit settings. Sep 10 00:36:57.770935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 10 00:36:57.770947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:36:57.770958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:36:57.770977 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:36:57.770988 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 10 00:36:57.770999 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 10 00:36:57.771009 systemd[1]: Created slice system-addon\x2drun.slice. Sep 10 00:36:57.771020 systemd[1]: Created slice system-getty.slice. Sep 10 00:36:57.771036 systemd[1]: Created slice system-modprobe.slice. Sep 10 00:36:57.771047 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 10 00:36:57.771057 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 10 00:36:57.771067 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 10 00:36:57.771077 systemd[1]: Created slice user.slice. Sep 10 00:36:57.771090 systemd[1]: Started systemd-ask-password-console.path. Sep 10 00:36:57.771101 systemd[1]: Started systemd-ask-password-wall.path. Sep 10 00:36:57.771112 systemd[1]: Set up automount boot.automount. Sep 10 00:36:57.771122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 10 00:36:57.771132 systemd[1]: Reached target integritysetup.target. Sep 10 00:36:57.771144 systemd[1]: Reached target remote-cryptsetup.target. Sep 10 00:36:57.771154 systemd[1]: Reached target remote-fs.target. Sep 10 00:36:57.771164 systemd[1]: Reached target slices.target. Sep 10 00:36:57.771175 systemd[1]: Reached target swap.target. Sep 10 00:36:57.771186 systemd[1]: Reached target torcx.target. Sep 10 00:36:57.771197 systemd[1]: Reached target veritysetup.target. 
Sep 10 00:36:57.771207 systemd[1]: Listening on systemd-coredump.socket. Sep 10 00:36:57.771217 systemd[1]: Listening on systemd-initctl.socket. Sep 10 00:36:57.771228 systemd[1]: Listening on systemd-journald-audit.socket. Sep 10 00:36:57.771238 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 10 00:36:57.771248 systemd[1]: Listening on systemd-journald.socket. Sep 10 00:36:57.771258 systemd[1]: Listening on systemd-networkd.socket. Sep 10 00:36:57.771268 systemd[1]: Listening on systemd-udevd-control.socket. Sep 10 00:36:57.771278 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 10 00:36:57.771288 systemd[1]: Listening on systemd-userdbd.socket. Sep 10 00:36:57.771298 systemd[1]: Mounting dev-hugepages.mount... Sep 10 00:36:57.771308 systemd[1]: Mounting dev-mqueue.mount... Sep 10 00:36:57.771320 systemd[1]: Mounting media.mount... Sep 10 00:36:57.771329 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:57.771340 systemd[1]: Mounting sys-kernel-debug.mount... Sep 10 00:36:57.771350 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 10 00:36:57.771361 systemd[1]: Mounting tmp.mount... Sep 10 00:36:57.771370 systemd[1]: Starting flatcar-tmpfiles.service... Sep 10 00:36:57.771384 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:36:57.771394 systemd[1]: Starting kmod-static-nodes.service... Sep 10 00:36:57.771405 systemd[1]: Starting modprobe@configfs.service... Sep 10 00:36:57.771419 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:36:57.771432 systemd[1]: Starting modprobe@drm.service... Sep 10 00:36:57.771445 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:36:57.771455 systemd[1]: Starting modprobe@fuse.service... Sep 10 00:36:57.771464 systemd[1]: Starting modprobe@loop.service... 
Sep 10 00:36:57.771474 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:36:57.771485 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 10 00:36:57.771495 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 10 00:36:57.771505 systemd[1]: Starting systemd-journald.service... Sep 10 00:36:57.771516 kernel: fuse: init (API version 7.34) Sep 10 00:36:57.771526 systemd[1]: Starting systemd-modules-load.service... Sep 10 00:36:57.771536 kernel: loop: module loaded Sep 10 00:36:57.771546 systemd[1]: Starting systemd-network-generator.service... Sep 10 00:36:57.771556 systemd[1]: Starting systemd-remount-fs.service... Sep 10 00:36:57.771566 systemd[1]: Starting systemd-udev-trigger.service... Sep 10 00:36:57.771576 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:57.771589 systemd-journald[1017]: Journal started Sep 10 00:36:57.771628 systemd-journald[1017]: Runtime Journal (/run/log/journal/faff4a3b61414f49ae0ab50b6e946029) is 6.0M, max 48.4M, 42.4M free. 
Sep 10 00:36:57.687000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 10 00:36:57.687000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 10 00:36:57.768000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 10 00:36:57.768000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffccab961a0 a2=4000 a3=7ffccab9623c items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:36:57.768000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 10 00:36:57.774997 systemd[1]: Started systemd-journald.service. Sep 10 00:36:57.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.776185 systemd[1]: Mounted dev-hugepages.mount. Sep 10 00:36:57.777045 systemd[1]: Mounted dev-mqueue.mount. Sep 10 00:36:57.778106 systemd[1]: Mounted media.mount. Sep 10 00:36:57.778928 systemd[1]: Mounted sys-kernel-debug.mount. Sep 10 00:36:57.779816 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 10 00:36:57.780747 systemd[1]: Mounted tmp.mount. Sep 10 00:36:57.781821 systemd[1]: Finished kmod-static-nodes.service. Sep 10 00:36:57.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:57.782856 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:36:57.783035 systemd[1]: Finished modprobe@configfs.service. Sep 10 00:36:57.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.784309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:36:57.784451 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:36:57.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.785577 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:36:57.785742 systemd[1]: Finished modprobe@drm.service. Sep 10 00:36:57.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.786702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 10 00:36:57.786863 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:36:57.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.787942 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:36:57.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.788097 systemd[1]: Finished modprobe@fuse.service. Sep 10 00:36:57.789064 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:36:57.789231 systemd[1]: Finished modprobe@loop.service. Sep 10 00:36:57.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.790435 systemd[1]: Finished systemd-modules-load.service. 
Sep 10 00:36:57.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.792894 systemd[1]: Finished flatcar-tmpfiles.service. Sep 10 00:36:57.794192 systemd[1]: Finished systemd-network-generator.service. Sep 10 00:36:57.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.795581 systemd[1]: Finished systemd-remount-fs.service. Sep 10 00:36:57.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.796731 systemd[1]: Reached target network-pre.target. Sep 10 00:36:57.798542 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 10 00:36:57.800239 systemd[1]: Mounting sys-kernel-config.mount... Sep 10 00:36:57.801001 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:36:57.802657 systemd[1]: Starting systemd-hwdb-update.service... Sep 10 00:36:57.804434 systemd[1]: Starting systemd-journal-flush.service... Sep 10 00:36:57.805280 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:36:57.806410 systemd[1]: Starting systemd-random-seed.service... 
Sep 10 00:36:57.807244 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:36:57.811692 systemd-journald[1017]: Time spent on flushing to /var/log/journal/faff4a3b61414f49ae0ab50b6e946029 is 14.922ms for 1104 entries. Sep 10 00:36:57.811692 systemd-journald[1017]: System Journal (/var/log/journal/faff4a3b61414f49ae0ab50b6e946029) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:36:57.846690 systemd-journald[1017]: Received client request to flush runtime journal. Sep 10 00:36:57.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.814088 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:36:57.816142 systemd[1]: Starting systemd-sysusers.service... Sep 10 00:36:57.820355 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 10 00:36:57.823020 systemd[1]: Mounted sys-kernel-config.mount. Sep 10 00:36:57.827667 systemd[1]: Finished systemd-random-seed.service. Sep 10 00:36:57.828743 systemd[1]: Reached target first-boot-complete.target. Sep 10 00:36:57.832451 systemd[1]: Finished systemd-sysctl.service. 
Sep 10 00:36:57.843920 systemd[1]: Finished systemd-sysusers.service. Sep 10 00:36:57.845497 systemd[1]: Finished systemd-udev-trigger.service. Sep 10 00:36:57.847910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 10 00:36:57.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:57.850217 systemd[1]: Starting systemd-udev-settle.service... Sep 10 00:36:57.851557 systemd[1]: Finished systemd-journal-flush.service. Sep 10 00:36:57.858988 udevadm[1068]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 10 00:36:57.869000 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 10 00:36:57.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.304043 systemd[1]: Finished systemd-hwdb-update.service. Sep 10 00:36:58.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.306404 systemd[1]: Starting systemd-udevd.service... Sep 10 00:36:58.323186 systemd-udevd[1071]: Using default interface naming scheme 'v252'. Sep 10 00:36:58.336120 systemd[1]: Started systemd-udevd.service. Sep 10 00:36:58.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.339215 systemd[1]: Starting systemd-networkd.service... 
Sep 10 00:36:58.346661 systemd[1]: Starting systemd-userdbd.service... Sep 10 00:36:58.360814 systemd[1]: Found device dev-ttyS0.device. Sep 10 00:36:58.390025 systemd[1]: Started systemd-userdbd.service. Sep 10 00:36:58.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.395854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:36:58.404327 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:36:58.403996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 10 00:36:58.420000 audit[1079]: AVC avc: denied { confidentiality } for pid=1079 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 10 00:36:58.420000 audit[1079]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556c1240edf0 a1=338ec a2=7eff0295abc5 a3=5 items=110 ppid=1071 pid=1079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:36:58.420000 audit: CWD cwd="/" Sep 10 00:36:58.420000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=1 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=2 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 
audit: PATH item=3 name=(null) inode=10237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=4 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=5 name=(null) inode=10238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=6 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=7 name=(null) inode=10239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=8 name=(null) inode=10239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=9 name=(null) inode=10240 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=10 name=(null) inode=10239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=11 name=(null) inode=15361 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=12 name=(null) inode=10239 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=13 name=(null) inode=15362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=14 name=(null) inode=10239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=15 name=(null) inode=15363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=16 name=(null) inode=10239 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=17 name=(null) inode=15364 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=18 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=19 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=20 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=21 name=(null) inode=15366 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=22 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=23 name=(null) inode=15367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=24 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=25 name=(null) inode=15368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=26 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=27 name=(null) inode=15369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=28 name=(null) inode=15365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=29 name=(null) inode=15370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=30 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=31 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=32 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=33 name=(null) inode=15372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=34 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=35 name=(null) inode=15373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=36 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=37 name=(null) inode=15374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=38 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=39 name=(null) inode=15375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=40 name=(null) inode=15371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=41 name=(null) inode=15376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=42 name=(null) inode=10236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=43 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=44 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=45 name=(null) inode=15378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=46 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=47 name=(null) inode=15379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=48 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=49 name=(null) inode=15380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=50 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=51 name=(null) inode=15381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=52 name=(null) inode=15377 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=53 name=(null) inode=15382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=55 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=56 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=57 name=(null) inode=15384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 
00:36:58.420000 audit: PATH item=58 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=59 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=60 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=61 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=62 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=63 name=(null) inode=15387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=64 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=65 name=(null) inode=15388 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=66 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=67 
name=(null) inode=15389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=68 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=69 name=(null) inode=15390 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=70 name=(null) inode=15386 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=71 name=(null) inode=15391 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=72 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=73 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=74 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=75 name=(null) inode=15393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=76 name=(null) inode=15392 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=77 name=(null) inode=15394 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=78 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=79 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=80 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=81 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=82 name=(null) inode=15392 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=83 name=(null) inode=15397 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=84 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=85 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=86 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=87 name=(null) inode=15399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=88 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=89 name=(null) inode=15400 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=90 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=91 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=92 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=93 name=(null) inode=15402 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=94 name=(null) inode=15398 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=95 name=(null) inode=15403 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=96 name=(null) inode=15383 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=97 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=98 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=99 name=(null) inode=15405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=100 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=101 name=(null) inode=15406 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=102 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=103 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=104 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=105 name=(null) inode=15408 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=106 name=(null) inode=15404 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=107 name=(null) inode=15409 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PATH item=109 name=(null) inode=15410 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 10 00:36:58.420000 audit: PROCTITLE proctitle="(udev-worker)" Sep 10 00:36:58.444350 systemd-networkd[1083]: lo: Link UP Sep 10 00:36:58.444717 systemd-networkd[1083]: lo: Gained carrier Sep 10 00:36:58.445224 systemd-networkd[1083]: Enumeration completed Sep 10 00:36:58.445350 systemd[1]: Started systemd-networkd.service. Sep 10 00:36:58.446034 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:36:58.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 10 00:36:58.447365 systemd-networkd[1083]: eth0: Link UP Sep 10 00:36:58.447373 systemd-networkd[1083]: eth0: Gained carrier Sep 10 00:36:58.455992 systemd-networkd[1083]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:36:58.459395 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 10 00:36:58.462046 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:36:58.462158 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:36:58.462262 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:36:58.468845 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:36:58.472840 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:36:58.524072 kernel: kvm: Nested Virtualization enabled Sep 10 00:36:58.524140 kernel: SVM: kvm: Nested Paging enabled Sep 10 00:36:58.524170 kernel: SVM: Virtual VMLOAD VMSAVE supported Sep 10 00:36:58.525254 kernel: SVM: Virtual GIF supported Sep 10 00:36:58.541853 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:36:58.566289 systemd[1]: Finished systemd-udev-settle.service. Sep 10 00:36:58.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.568733 systemd[1]: Starting lvm2-activation-early.service... Sep 10 00:36:58.578086 lvm[1109]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:36:58.609045 systemd[1]: Finished lvm2-activation-early.service. Sep 10 00:36:58.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:58.610265 systemd[1]: Reached target cryptsetup.target. Sep 10 00:36:58.612566 systemd[1]: Starting lvm2-activation.service... Sep 10 00:36:58.616179 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:36:58.643070 systemd[1]: Finished lvm2-activation.service. Sep 10 00:36:58.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.644223 systemd[1]: Reached target local-fs-pre.target. Sep 10 00:36:58.645171 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:36:58.645188 systemd[1]: Reached target local-fs.target. Sep 10 00:36:58.646046 systemd[1]: Reached target machines.target. Sep 10 00:36:58.648083 systemd[1]: Starting ldconfig.service... Sep 10 00:36:58.649190 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:36:58.649242 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:36:58.650233 systemd[1]: Starting systemd-boot-update.service... Sep 10 00:36:58.651869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 10 00:36:58.653964 systemd[1]: Starting systemd-machine-id-commit.service... Sep 10 00:36:58.655858 systemd[1]: Starting systemd-sysext.service... Sep 10 00:36:58.659634 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1114 (bootctl) Sep 10 00:36:58.660879 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Sep 10 00:36:58.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.665224 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 10 00:36:58.669119 systemd[1]: Unmounting usr-share-oem.mount... Sep 10 00:36:58.672735 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 10 00:36:58.673022 systemd[1]: Unmounted usr-share-oem.mount. Sep 10 00:36:58.686863 kernel: loop0: detected capacity change from 0 to 221472 Sep 10 00:36:58.693874 systemd-fsck[1123]: fsck.fat 4.2 (2021-01-31) Sep 10 00:36:58.693874 systemd-fsck[1123]: /dev/vda1: 791 files, 120785/258078 clusters Sep 10 00:36:58.695265 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 10 00:36:58.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.698689 systemd[1]: Mounting boot.mount... Sep 10 00:36:58.718027 systemd[1]: Mounted boot.mount. Sep 10 00:36:58.977389 systemd[1]: Finished systemd-boot-update.service. Sep 10 00:36:58.978848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:36:58.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:58.980229 kernel: kauditd_printk_skb: 199 callbacks suppressed Sep 10 00:36:58.980280 kernel: audit: type=1130 audit(1757464618.978:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:58.997855 kernel: loop1: detected capacity change from 0 to 221472 Sep 10 00:36:59.002362 (sd-sysext)[1134]: Using extensions 'kubernetes'. Sep 10 00:36:59.002708 (sd-sysext)[1134]: Merged extensions into '/usr'. Sep 10 00:36:59.016568 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.018149 systemd[1]: Mounting usr-share-oem.mount... Sep 10 00:36:59.019279 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.020245 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:36:59.022280 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:36:59.023823 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:36:59.024268 systemd[1]: Starting modprobe@loop.service... Sep 10 00:36:59.025200 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.025318 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:36:59.025425 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.027881 systemd[1]: Mounted usr-share-oem.mount. Sep 10 00:36:59.029323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:36:59.029457 systemd[1]: Finished modprobe@dm_mod.service. 
Sep 10 00:36:59.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.030648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:36:59.030789 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:36:59.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.050173 kernel: audit: type=1130 audit(1757464619.029:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.050211 kernel: audit: type=1131 audit(1757464619.029:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.051484 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:36:59.051618 systemd[1]: Finished modprobe@loop.service. Sep 10 00:36:59.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:59.054849 kernel: audit: type=1130 audit(1757464619.050:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.054875 kernel: audit: type=1131 audit(1757464619.050:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.058434 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:36:59.058525 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.059327 systemd[1]: Finished systemd-sysext.service. Sep 10 00:36:59.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.061851 kernel: audit: type=1130 audit(1757464619.057:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.061894 kernel: audit: type=1131 audit(1757464619.057:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:59.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.066415 systemd[1]: Starting ensure-sysext.service... Sep 10 00:36:59.068848 kernel: audit: type=1130 audit(1757464619.064:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.070077 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 10 00:36:59.075490 systemd[1]: Reloading. Sep 10 00:36:59.116770 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 10 00:36:59.118583 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:36:59.120468 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:36:59.123517 /usr/lib/systemd/system-generators/torcx-generator[1168]: time="2025-09-10T00:36:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:36:59.123864 /usr/lib/systemd/system-generators/torcx-generator[1168]: time="2025-09-10T00:36:59Z" level=info msg="torcx already run" Sep 10 00:36:59.211235 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:36:59.211258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Sep 10 00:36:59.236769 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:36:59.308280 systemd[1]: Finished ldconfig.service. Sep 10 00:36:59.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.310505 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 10 00:36:59.312855 kernel: audit: type=1130 audit(1757464619.308:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.315225 systemd[1]: Starting audit-rules.service... Sep 10 00:36:59.316856 kernel: audit: type=1130 audit(1757464619.312:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.317678 systemd[1]: Starting clean-ca-certificates.service... Sep 10 00:36:59.319671 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 10 00:36:59.322257 systemd[1]: Starting systemd-resolved.service... Sep 10 00:36:59.324729 systemd[1]: Starting systemd-timesyncd.service... Sep 10 00:36:59.326473 systemd[1]: Starting systemd-update-utmp.service... Sep 10 00:36:59.327884 systemd[1]: Finished clean-ca-certificates.service. 
Sep 10 00:36:59.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.332508 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.332749 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.334056 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:36:59.335982 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:36:59.337870 systemd[1]: Starting modprobe@loop.service... Sep 10 00:36:59.338717 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.338878 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:36:59.339033 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:36:59.339127 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.340348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:36:59.341071 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:36:59.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:59.342507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:36:59.342685 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:36:59.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.354034 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:36:59.354338 systemd[1]: Finished modprobe@loop.service. Sep 10 00:36:59.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.355896 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:36:59.356174 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.358270 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.358602 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.360062 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:36:59.362107 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 10 00:36:59.366093 systemd[1]: Starting modprobe@loop.service... Sep 10 00:36:59.367020 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.367180 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:36:59.367351 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:36:59.367453 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.368652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:36:59.369104 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:36:59.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.370451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:36:59.370613 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:36:59.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:59.371856 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:36:59.372240 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:36:59.372741 systemd[1]: Finished modprobe@loop.service. Sep 10 00:36:59.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.374101 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.377632 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.378308 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.379842 systemd[1]: Starting modprobe@dm_mod.service... Sep 10 00:36:59.381549 systemd[1]: Starting modprobe@drm.service... Sep 10 00:36:59.383739 systemd[1]: Starting modprobe@efi_pstore.service... Sep 10 00:36:59.385333 systemd[1]: Starting modprobe@loop.service... Sep 10 00:36:59.386211 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.386325 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:36:59.387415 systemd[1]: Starting systemd-networkd-wait-online.service... 
Sep 10 00:36:59.388426 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:36:59.388532 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:36:59.393399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:36:59.393527 systemd[1]: Finished modprobe@dm_mod.service. Sep 10 00:36:59.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.395107 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:36:59.395257 systemd[1]: Finished modprobe@drm.service. Sep 10 00:36:59.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.396492 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:36:59.396645 systemd[1]: Finished modprobe@efi_pstore.service. Sep 10 00:36:59.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:59.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.397971 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:36:59.398313 systemd[1]: Finished modprobe@loop.service. Sep 10 00:36:59.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.401113 systemd[1]: Finished ensure-sysext.service. Sep 10 00:36:59.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.402957 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:36:59.402991 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 10 00:36:59.474000 audit[1225]: SYSTEM_BOOT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 10 00:36:59.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 10 00:36:59.478297 systemd-resolved[1223]: Positive Trust Anchors: Sep 10 00:36:59.478305 systemd-resolved[1223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:36:59.478330 systemd-resolved[1223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 10 00:36:59.478443 systemd[1]: Finished systemd-update-utmp.service. Sep 10 00:36:59.479613 systemd[1]: Started systemd-timesyncd.service. Sep 10 00:36:59.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:37:00.307537 systemd-timesyncd[1224]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:37:00.307599 systemd-timesyncd[1224]: Initial clock synchronization to Wed 2025-09-10 00:37:00.307407 UTC. Sep 10 00:37:00.307635 systemd[1]: Reached target time-set.target. Sep 10 00:37:00.337917 systemd-resolved[1223]: Defaulting to hostname 'linux'. Sep 10 00:37:00.339313 systemd[1]: Started systemd-resolved.service. Sep 10 00:37:00.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:37:00.340812 systemd[1]: Finished systemd-journal-catalog-update.service. 
Sep 10 00:37:00.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 10 00:37:00.342090 systemd[1]: Reached target network.target. Sep 10 00:37:00.343096 systemd[1]: Reached target nss-lookup.target. Sep 10 00:37:00.343000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 10 00:37:00.343000 audit[1269]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5030f880 a2=420 a3=0 items=0 ppid=1219 pid=1269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 10 00:37:00.343000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 10 00:37:00.344344 augenrules[1269]: No rules Sep 10 00:37:00.345694 systemd[1]: Starting systemd-update-done.service... Sep 10 00:37:00.347199 systemd[1]: Finished audit-rules.service. Sep 10 00:37:00.351258 systemd[1]: Finished systemd-update-done.service. Sep 10 00:37:00.353850 systemd[1]: Reached target sysinit.target. Sep 10 00:37:00.355041 systemd[1]: Started motdgen.path. Sep 10 00:37:00.356002 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 10 00:37:00.359206 systemd[1]: Started logrotate.timer. Sep 10 00:37:00.360267 systemd[1]: Started mdadm.timer. Sep 10 00:37:00.361306 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 10 00:37:00.362430 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:37:00.362467 systemd[1]: Reached target paths.target. Sep 10 00:37:00.363443 systemd[1]: Reached target timers.target. 
Sep 10 00:37:00.364727 systemd[1]: Listening on dbus.socket. Sep 10 00:37:00.367102 systemd[1]: Starting docker.socket... Sep 10 00:37:00.405956 systemd[1]: Listening on sshd.socket. Sep 10 00:37:00.407464 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 10 00:37:00.407935 systemd[1]: Listening on docker.socket. Sep 10 00:37:00.408784 systemd[1]: Reached target sockets.target. Sep 10 00:37:00.409558 systemd[1]: Reached target basic.target. Sep 10 00:37:00.410454 systemd[1]: System is tainted: cgroupsv1 Sep 10 00:37:00.410511 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:37:00.410530 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 10 00:37:00.411541 systemd[1]: Starting containerd.service... Sep 10 00:37:00.413434 systemd[1]: Starting dbus.service... Sep 10 00:37:00.415086 systemd[1]: Starting enable-oem-cloudinit.service... Sep 10 00:37:00.417099 systemd[1]: Starting extend-filesystems.service... Sep 10 00:37:00.418330 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 10 00:37:00.419293 systemd[1]: Starting motdgen.service... Sep 10 00:37:00.420996 systemd[1]: Starting prepare-helm.service... Sep 10 00:37:00.423988 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 10 00:37:00.425234 jq[1282]: false Sep 10 00:37:00.426309 systemd[1]: Starting sshd-keygen.service... Sep 10 00:37:00.428854 systemd[1]: Starting systemd-logind.service... Sep 10 00:37:00.431445 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Sep 10 00:37:00.448248 dbus-daemon[1281]: [system] SELinux support is enabled Sep 10 00:37:00.431518 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:37:00.432540 systemd[1]: Starting update-engine.service... Sep 10 00:37:00.456336 jq[1297]: true Sep 10 00:37:00.434147 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 10 00:37:00.441709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:37:00.443234 systemd[1]: Finished systemd-machine-id-commit.service. Sep 10 00:37:00.445734 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:37:00.445966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 10 00:37:00.447155 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:37:00.447394 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 10 00:37:00.448729 systemd[1]: Started dbus.service. Sep 10 00:37:00.454594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:37:00.454671 systemd[1]: Reached target system-config.target. Sep 10 00:37:00.458113 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 10 00:37:00.479367 extend-filesystems[1284]: Found loop1 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found sr0 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda1 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda2 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda3 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found usr Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda4 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda6 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda7 Sep 10 00:37:00.479367 extend-filesystems[1284]: Found vda9 Sep 10 00:37:00.479367 extend-filesystems[1284]: Checking size of /dev/vda9 Sep 10 00:37:00.774447 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:37:00.774675 tar[1302]: linux-amd64/helm Sep 10 00:37:00.458137 systemd[1]: Reached target user-config.target. Sep 10 00:37:00.775186 extend-filesystems[1284]: Resized partition /dev/vda9 Sep 10 00:37:00.473110 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:37:00.777483 extend-filesystems[1333]: resize2fs 1.46.5 (30-Dec-2021) Sep 10 00:37:00.795954 jq[1306]: true Sep 10 00:37:00.473352 systemd[1]: Finished motdgen.service. Sep 10 00:37:00.796609 update_engine[1295]: I0910 00:37:00.779650 1295 main.cc:92] Flatcar Update Engine starting Sep 10 00:37:00.796609 update_engine[1295]: I0910 00:37:00.784600 1295 update_check_scheduler.cc:74] Next update check in 6m13s Sep 10 00:37:00.783305 systemd[1]: Started update-engine.service. Sep 10 00:37:00.787175 systemd[1]: Started locksmithd.service. Sep 10 00:37:00.796419 systemd-logind[1293]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:37:00.796434 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:37:00.797717 systemd-logind[1293]: New seat seat0. 
Sep 10 00:37:00.798481 env[1312]: time="2025-09-10T00:37:00.798416752Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 10 00:37:00.852654 systemd-networkd[1083]: eth0: Gained IPv6LL Sep 10 00:37:00.866864 systemd[1]: Started systemd-logind.service. Sep 10 00:37:00.868783 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 10 00:37:00.870910 systemd[1]: Reached target network-online.target. Sep 10 00:37:00.877004 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:37:00.874162 systemd[1]: Starting kubelet.service... Sep 10 00:37:00.904825 extend-filesystems[1333]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:37:00.904825 extend-filesystems[1333]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:37:00.904825 extend-filesystems[1333]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.890399052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.898931423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900293638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900325538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900612526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900627724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900639396Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900648874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900719717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:37:00.928939 env[1312]: time="2025-09-10T00:37:00.900926284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:37:00.929729 extend-filesystems[1284]: Resized filesystem in /dev/vda9 Sep 10 00:37:00.905632 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:37:00.931806 bash[1340]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.901053292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.901066908Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.901105200Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.901119126Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931642197Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931718370Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931761802Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931810734Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931847162Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931862611Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931876196Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931914147Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.931927 env[1312]: time="2025-09-10T00:37:00.931929336Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Sep 10 00:37:00.905927 systemd[1]: Finished extend-filesystems.service. Sep 10 00:37:00.932417 env[1312]: time="2025-09-10T00:37:00.931945186Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.932417 env[1312]: time="2025-09-10T00:37:00.931966125Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.932417 env[1312]: time="2025-09-10T00:37:00.932000960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:37:00.932417 env[1312]: time="2025-09-10T00:37:00.932154759Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:37:00.932417 env[1312]: time="2025-09-10T00:37:00.932259826Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:37:00.929300 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 10 00:37:00.932779 env[1312]: time="2025-09-10T00:37:00.932747250Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:37:00.932831 env[1312]: time="2025-09-10T00:37:00.932806791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.932831 env[1312]: time="2025-09-10T00:37:00.932824244Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:37:00.932906 env[1312]: time="2025-09-10T00:37:00.932893885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.932943 env[1312]: time="2025-09-10T00:37:00.932909634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Sep 10 00:37:00.932972 env[1312]: time="2025-09-10T00:37:00.932940572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.932972 env[1312]: time="2025-09-10T00:37:00.932954629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.932972 env[1312]: time="2025-09-10T00:37:00.932968344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933058 env[1312]: time="2025-09-10T00:37:00.932982471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933058 env[1312]: time="2025-09-10T00:37:00.932995896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933058 env[1312]: time="2025-09-10T00:37:00.933021364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933058 env[1312]: time="2025-09-10T00:37:00.933039318Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:37:00.933216 env[1312]: time="2025-09-10T00:37:00.933190742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933216 env[1312]: time="2025-09-10T00:37:00.933214316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933327 env[1312]: time="2025-09-10T00:37:00.933231748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.933327 env[1312]: time="2025-09-10T00:37:00.933259420Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 10 00:37:00.933327 env[1312]: time="2025-09-10T00:37:00.933276833Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 10 00:37:00.933327 env[1312]: time="2025-09-10T00:37:00.933288525Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:37:00.933438 env[1312]: time="2025-09-10T00:37:00.933330774Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 10 00:37:00.933438 env[1312]: time="2025-09-10T00:37:00.933373805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 10 00:37:00.934147 env[1312]: time="2025-09-10T00:37:00.933994629Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:37:00.934147 env[1312]: time="2025-09-10T00:37:00.934080280Z" level=info msg="Connect containerd service" Sep 10 00:37:00.934948 env[1312]: time="2025-09-10T00:37:00.934135794Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:37:00.934948 env[1312]: time="2025-09-10T00:37:00.934914524Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935207904Z" level=info msg="Start subscribing containerd event" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935277214Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935291822Z" level=info msg="Start recovering state" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935335614Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935380578Z" level=info msg="Start event monitor" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935398402Z" level=info msg="Start snapshots syncer" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935431183Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935440621Z" level=info msg="Start streaming server" Sep 10 00:37:00.939473 env[1312]: time="2025-09-10T00:37:00.935833287Z" level=info msg="containerd successfully booted in 0.297835s" Sep 10 00:37:00.935449 systemd[1]: Started containerd.service. Sep 10 00:37:01.020962 sshd_keygen[1300]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:37:01.039268 locksmithd[1342]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:37:01.058444 systemd[1]: Finished sshd-keygen.service. Sep 10 00:37:01.066221 systemd[1]: Starting issuegen.service... Sep 10 00:37:01.072603 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:37:01.072849 systemd[1]: Finished issuegen.service. Sep 10 00:37:01.075509 systemd[1]: Starting systemd-user-sessions.service... Sep 10 00:37:01.081932 systemd[1]: Finished systemd-user-sessions.service. Sep 10 00:37:01.085825 systemd[1]: Started getty@tty1.service. Sep 10 00:37:01.089838 systemd[1]: Started serial-getty@ttyS0.service. Sep 10 00:37:01.092405 systemd[1]: Reached target getty.target. Sep 10 00:37:01.360947 tar[1302]: linux-amd64/LICENSE Sep 10 00:37:01.360947 tar[1302]: linux-amd64/README.md Sep 10 00:37:01.367530 systemd[1]: Finished prepare-helm.service. 
Sep 10 00:37:01.768235 systemd[1]: Created slice system-sshd.slice. Sep 10 00:37:01.803957 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:41420.service. Sep 10 00:37:01.843864 sshd[1380]: Accepted publickey for core from 10.0.0.1 port 41420 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:37:01.845466 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:37:01.855306 systemd-logind[1293]: New session 1 of user core. Sep 10 00:37:01.856299 systemd[1]: Created slice user-500.slice. Sep 10 00:37:01.858222 systemd[1]: Starting user-runtime-dir@500.service... Sep 10 00:37:01.867943 systemd[1]: Finished user-runtime-dir@500.service. Sep 10 00:37:01.870237 systemd[1]: Starting user@500.service... Sep 10 00:37:01.873449 (systemd)[1385]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:37:01.941726 systemd[1385]: Queued start job for default target default.target. Sep 10 00:37:01.941958 systemd[1385]: Reached target paths.target. Sep 10 00:37:01.941973 systemd[1385]: Reached target sockets.target. Sep 10 00:37:01.941984 systemd[1385]: Reached target timers.target. Sep 10 00:37:01.941994 systemd[1385]: Reached target basic.target. Sep 10 00:37:01.942125 systemd[1]: Started user@500.service. Sep 10 00:37:01.943485 systemd[1385]: Reached target default.target. Sep 10 00:37:01.943568 systemd[1385]: Startup finished in 63ms. Sep 10 00:37:01.943784 systemd[1]: Started session-1.scope. Sep 10 00:37:01.994722 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:41422.service. Sep 10 00:37:02.032031 sshd[1394]: Accepted publickey for core from 10.0.0.1 port 41422 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:37:02.073050 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:37:02.076376 systemd-logind[1293]: New session 2 of user core. Sep 10 00:37:02.077068 systemd[1]: Started session-2.scope. 
Sep 10 00:37:02.141346 sshd[1394]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:02.144033 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:41436.service. Sep 10 00:37:02.145809 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:41422.service: Deactivated successfully. Sep 10 00:37:02.146659 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:37:02.148149 systemd-logind[1293]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:37:02.149088 systemd-logind[1293]: Removed session 2. Sep 10 00:37:02.235066 sshd[1399]: Accepted publickey for core from 10.0.0.1 port 41436 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:37:02.236364 sshd[1399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:37:02.239750 systemd-logind[1293]: New session 3 of user core. Sep 10 00:37:02.240461 systemd[1]: Started session-3.scope. Sep 10 00:37:02.298543 sshd[1399]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:02.300754 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:41436.service: Deactivated successfully. Sep 10 00:37:02.301517 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:37:02.301607 systemd-logind[1293]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:37:02.302448 systemd-logind[1293]: Removed session 3. Sep 10 00:37:02.689168 systemd[1]: Started kubelet.service. Sep 10 00:37:02.690690 systemd[1]: Reached target multi-user.target. Sep 10 00:37:02.693480 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 10 00:37:02.699933 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 10 00:37:02.700138 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 10 00:37:02.703182 systemd[1]: Startup finished in 7.171s (kernel) + 7.675s (userspace) = 14.847s. 
Sep 10 00:37:03.428823 kubelet[1413]: E0910 00:37:03.428721 1413 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:37:03.431693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:37:03.432071 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:37:12.302297 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:40990.service.
Sep 10 00:37:12.341843 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 40990 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:37:12.343166 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:37:12.347260 systemd-logind[1293]: New session 4 of user core.
Sep 10 00:37:12.348333 systemd[1]: Started session-4.scope.
Sep 10 00:37:12.402530 sshd[1423]: pam_unix(sshd:session): session closed for user core
Sep 10 00:37:12.405301 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:40994.service.
Sep 10 00:37:12.405829 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:40990.service: Deactivated successfully.
Sep 10 00:37:12.406792 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 00:37:12.407322 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit.
Sep 10 00:37:12.408331 systemd-logind[1293]: Removed session 4.
Sep 10 00:37:12.441087 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 40994 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:37:12.442084 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:37:12.445554 systemd-logind[1293]: New session 5 of user core.
Sep 10 00:37:12.446203 systemd[1]: Started session-5.scope.
Sep 10 00:37:12.497507 sshd[1429]: pam_unix(sshd:session): session closed for user core
Sep 10 00:37:12.500804 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:41002.service.
Sep 10 00:37:12.501245 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:40994.service: Deactivated successfully.
Sep 10 00:37:12.502242 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 00:37:12.502317 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit.
Sep 10 00:37:12.503212 systemd-logind[1293]: Removed session 5.
Sep 10 00:37:12.537600 sshd[1436]: Accepted publickey for core from 10.0.0.1 port 41002 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:37:12.538745 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:37:12.542003 systemd-logind[1293]: New session 6 of user core.
Sep 10 00:37:12.542875 systemd[1]: Started session-6.scope.
Sep 10 00:37:12.596550 sshd[1436]: pam_unix(sshd:session): session closed for user core
Sep 10 00:37:12.598658 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:41018.service.
Sep 10 00:37:12.599664 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:41002.service: Deactivated successfully.
Sep 10 00:37:12.600389 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 00:37:12.600796 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit.
Sep 10 00:37:12.601417 systemd-logind[1293]: Removed session 6.
Sep 10 00:37:12.635877 sshd[1442]: Accepted publickey for core from 10.0.0.1 port 41018 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U
Sep 10 00:37:12.637004 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 10 00:37:12.640000 systemd-logind[1293]: New session 7 of user core.
Sep 10 00:37:12.640653 systemd[1]: Started session-7.scope.
Sep 10 00:37:12.696862 sudo[1448]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 00:37:12.697104 sudo[1448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 10 00:37:12.729882 systemd[1]: Starting docker.service...
Sep 10 00:37:12.866962 env[1460]: time="2025-09-10T00:37:12.866805168Z" level=info msg="Starting up"
Sep 10 00:37:12.868304 env[1460]: time="2025-09-10T00:37:12.868254917Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 10 00:37:12.868304 env[1460]: time="2025-09-10T00:37:12.868287428Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 10 00:37:12.868450 env[1460]: time="2025-09-10T00:37:12.868312705Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 10 00:37:12.868450 env[1460]: time="2025-09-10T00:37:12.868331260Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 10 00:37:12.870156 env[1460]: time="2025-09-10T00:37:12.870125826Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Sep 10 00:37:12.870156 env[1460]: time="2025-09-10T00:37:12.870143960Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Sep 10 00:37:12.870156 env[1460]: time="2025-09-10T00:37:12.870154630Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Sep 10 00:37:12.870253 env[1460]: time="2025-09-10T00:37:12.870161653Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Sep 10 00:37:13.511071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:37:13.511258 systemd[1]: Stopped kubelet.service.
Sep 10 00:37:13.512639 systemd[1]: Starting kubelet.service...
Sep 10 00:37:13.516383 env[1460]: time="2025-09-10T00:37:13.516338465Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Sep 10 00:37:13.516383 env[1460]: time="2025-09-10T00:37:13.516365566Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Sep 10 00:37:13.516590 env[1460]: time="2025-09-10T00:37:13.516535154Z" level=info msg="Loading containers: start."
Sep 10 00:37:13.674171 systemd[1]: Started kubelet.service.
Sep 10 00:37:14.027567 kubelet[1512]: E0910 00:37:14.027397 1512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:37:14.030951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:37:14.031083 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:37:14.037529 kernel: Initializing XFRM netlink socket
Sep 10 00:37:14.064993 env[1460]: time="2025-09-10T00:37:14.064947613Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Sep 10 00:37:14.112568 systemd-networkd[1083]: docker0: Link UP
Sep 10 00:37:14.126810 env[1460]: time="2025-09-10T00:37:14.126764722Z" level=info msg="Loading containers: done."
Sep 10 00:37:14.140914 env[1460]: time="2025-09-10T00:37:14.140862921Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 00:37:14.141074 env[1460]: time="2025-09-10T00:37:14.141024615Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Sep 10 00:37:14.141123 env[1460]: time="2025-09-10T00:37:14.141097091Z" level=info msg="Daemon has completed initialization"
Sep 10 00:37:14.158591 systemd[1]: Started docker.service.
Sep 10 00:37:14.169280 env[1460]: time="2025-09-10T00:37:14.169202689Z" level=info msg="API listen on /run/docker.sock"
Sep 10 00:37:15.026759 env[1312]: time="2025-09-10T00:37:15.026691867Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 10 00:37:15.722359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033082779.mount: Deactivated successfully.
Sep 10 00:37:17.509287 env[1312]: time="2025-09-10T00:37:17.509211157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:17.511439 env[1312]: time="2025-09-10T00:37:17.511397898Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:17.514007 env[1312]: time="2025-09-10T00:37:17.513947028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:17.515822 env[1312]: time="2025-09-10T00:37:17.515790225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:17.516640 env[1312]: time="2025-09-10T00:37:17.516610283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 10 00:37:17.517599 env[1312]: time="2025-09-10T00:37:17.517571666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 10 00:37:19.598071 env[1312]: time="2025-09-10T00:37:19.597976317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:19.600432 env[1312]: time="2025-09-10T00:37:19.600357553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:19.602343 env[1312]: time="2025-09-10T00:37:19.602292000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:19.604435 env[1312]: time="2025-09-10T00:37:19.604377822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:19.605133 env[1312]: time="2025-09-10T00:37:19.605103523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 10 00:37:19.605715 env[1312]: time="2025-09-10T00:37:19.605679824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 10 00:37:21.589820 env[1312]: time="2025-09-10T00:37:21.589740876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:21.591659 env[1312]: time="2025-09-10T00:37:21.591543257Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:21.594308 env[1312]: time="2025-09-10T00:37:21.594247748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:21.596245 env[1312]: time="2025-09-10T00:37:21.596186114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:21.597000 env[1312]: time="2025-09-10T00:37:21.596954905Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 10 00:37:21.597602 env[1312]: time="2025-09-10T00:37:21.597571311Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 10 00:37:23.703763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2047448153.mount: Deactivated successfully.
Sep 10 00:37:24.283173 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 10 00:37:24.283396 systemd[1]: Stopped kubelet.service.
Sep 10 00:37:24.285049 systemd[1]: Starting kubelet.service...
Sep 10 00:37:24.907672 systemd[1]: Started kubelet.service.
Sep 10 00:37:25.078974 kubelet[1615]: E0910 00:37:25.078920 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:37:25.079506 env[1312]: time="2025-09-10T00:37:25.079445328Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:25.080589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:37:25.080752 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:37:25.081976 env[1312]: time="2025-09-10T00:37:25.081941900Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:25.083489 env[1312]: time="2025-09-10T00:37:25.083466389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:25.085577 env[1312]: time="2025-09-10T00:37:25.085549666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:25.086056 env[1312]: time="2025-09-10T00:37:25.086020348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 10 00:37:25.086597 env[1312]: time="2025-09-10T00:37:25.086573075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 00:37:25.788956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460849349.mount: Deactivated successfully.
Sep 10 00:37:27.875625 env[1312]: time="2025-09-10T00:37:27.875532660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:27.877832 env[1312]: time="2025-09-10T00:37:27.877767110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:27.880634 env[1312]: time="2025-09-10T00:37:27.880561440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:27.884562 env[1312]: time="2025-09-10T00:37:27.884480540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:27.885701 env[1312]: time="2025-09-10T00:37:27.885654482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 10 00:37:27.886517 env[1312]: time="2025-09-10T00:37:27.886453741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 00:37:28.573467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85132641.mount: Deactivated successfully.
Sep 10 00:37:28.580628 env[1312]: time="2025-09-10T00:37:28.580557182Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:28.582422 env[1312]: time="2025-09-10T00:37:28.582374110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:28.583741 env[1312]: time="2025-09-10T00:37:28.583697762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:28.585034 env[1312]: time="2025-09-10T00:37:28.585010484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:28.585445 env[1312]: time="2025-09-10T00:37:28.585410043Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 10 00:37:28.585939 env[1312]: time="2025-09-10T00:37:28.585920270Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 10 00:37:29.685001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562809161.mount: Deactivated successfully.
Sep 10 00:37:33.508540 env[1312]: time="2025-09-10T00:37:33.508458469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:33.510685 env[1312]: time="2025-09-10T00:37:33.510658078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:33.512782 env[1312]: time="2025-09-10T00:37:33.512742094Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:33.515754 env[1312]: time="2025-09-10T00:37:33.515697496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 10 00:37:33.516781 env[1312]: time="2025-09-10T00:37:33.516735285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 10 00:37:35.281829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 10 00:37:35.282077 systemd[1]: Stopped kubelet.service.
Sep 10 00:37:35.283885 systemd[1]: Starting kubelet.service...
Sep 10 00:37:35.399451 systemd[1]: Started kubelet.service.
Sep 10 00:37:35.475651 kubelet[1652]: E0910 00:37:35.475583 1652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:37:35.477412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:37:35.477597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:37:36.008197 systemd[1]: Stopped kubelet.service.
Sep 10 00:37:36.010284 systemd[1]: Starting kubelet.service...
Sep 10 00:37:36.039227 systemd[1]: Reloading.
Sep 10 00:37:36.111774 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-09-10T00:37:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 10 00:37:36.111798 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2025-09-10T00:37:36Z" level=info msg="torcx already run"
Sep 10 00:37:37.218936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 10 00:37:37.218954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 10 00:37:37.241419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:37:37.331931 systemd[1]: Started kubelet.service.
Sep 10 00:37:37.333861 systemd[1]: Stopping kubelet.service...
Sep 10 00:37:37.334365 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:37:37.334670 systemd[1]: Stopped kubelet.service.
Sep 10 00:37:37.340217 systemd[1]: Starting kubelet.service...
Sep 10 00:37:37.434024 systemd[1]: Started kubelet.service.
Sep 10 00:37:37.579600 kubelet[1750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:37:37.579600 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:37:37.579600 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:37:37.580089 kubelet[1750]: I0910 00:37:37.579673 1750 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:37:37.873723 kubelet[1750]: I0910 00:37:37.873595 1750 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 00:37:37.873723 kubelet[1750]: I0910 00:37:37.873634 1750 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:37:37.873911 kubelet[1750]: I0910 00:37:37.873903 1750 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 00:37:37.927590 kubelet[1750]: I0910 00:37:37.927506 1750 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:37:37.927928 kubelet[1750]: E0910 00:37:37.927887 1750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:37:37.958180 kubelet[1750]: E0910 00:37:37.958133 1750 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:37:37.958180 kubelet[1750]: I0910 00:37:37.958166 1750 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:37:37.971155 kubelet[1750]: I0910 00:37:37.971092 1750 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:37:37.973343 kubelet[1750]: I0910 00:37:37.973312 1750 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:37:37.973510 kubelet[1750]: I0910 00:37:37.973463 1750 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:37:37.973767 kubelet[1750]: I0910 00:37:37.973507 1750 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 10 00:37:37.973894 kubelet[1750]: I0910 00:37:37.973788 1750 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:37:37.973894 kubelet[1750]: I0910 00:37:37.973801 1750 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:37:37.973972 kubelet[1750]: I0910 00:37:37.973957 1750 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:37:37.981300 kubelet[1750]: I0910 00:37:37.981247 1750 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:37:37.981300 kubelet[1750]: I0910 00:37:37.981294 1750 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:37:37.981520 kubelet[1750]: I0910 00:37:37.981356 1750 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:37:37.981520 kubelet[1750]: I0910 00:37:37.981389 1750 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:37:38.000706 kubelet[1750]: W0910 00:37:38.000606 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused
Sep 10 00:37:38.000706 kubelet[1750]: E0910 00:37:38.000691 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:37:38.000998 kubelet[1750]: W0910 00:37:38.000879 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused
Sep 10 00:37:38.000998 kubelet[1750]: E0910 00:37:38.000958 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:37:38.006665 kubelet[1750]: I0910 00:37:38.006634 1750 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Sep 10 00:37:38.007183 kubelet[1750]: I0910 00:37:38.007164 1750 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:37:38.007244 kubelet[1750]: W0910 00:37:38.007234 1750 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 00:37:38.010229 kubelet[1750]: I0910 00:37:38.010205 1750 server.go:1274] "Started kubelet"
Sep 10 00:37:38.010321 kubelet[1750]: I0910 00:37:38.010291 1750 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:37:38.011544 kubelet[1750]: I0910 00:37:38.011523 1750 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 00:37:38.014705 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Sep 10 00:37:38.014942 kubelet[1750]: I0910 00:37:38.014907 1750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:37:38.015263 kubelet[1750]: I0910 00:37:38.015247 1750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:37:38.017464 kubelet[1750]: I0910 00:37:38.017362 1750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:37:38.017637 kubelet[1750]: I0910 00:37:38.017609 1750 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:37:38.018568 kubelet[1750]: I0910 00:37:38.018366 1750 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:37:38.018568 kubelet[1750]: I0910 00:37:38.018514 1750 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:37:38.018568 kubelet[1750]: I0910 00:37:38.018568 1750 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:37:38.019471 kubelet[1750]: W0910 00:37:38.018971 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused
Sep 10 00:37:38.019471 kubelet[1750]: E0910 00:37:38.019026 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:37:38.019471 kubelet[1750]: I0910 00:37:38.019196 1750 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:37:38.019471 kubelet[1750]: I0910 00:37:38.019263 1750 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:37:38.019938 kubelet[1750]: E0910 00:37:38.019919 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:37:38.020114 kubelet[1750]: E0910 00:37:38.020077 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms"
Sep 10 00:37:38.020209 kubelet[1750]: E0910 00:37:38.020090 1750 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:37:38.021185 kubelet[1750]: I0910 00:37:38.021164 1750 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:37:38.032543 kubelet[1750]: E0910 00:37:38.027857 1750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c4cb85ffe765 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:37:38.010171237 +0000 UTC m=+0.572025095,LastTimestamp:2025-09-10 00:37:38.010171237 +0000 UTC m=+0.572025095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:37:38.089103 kubelet[1750]: I0910 00:37:38.089049 1750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:37:38.090579 kubelet[1750]: I0910 00:37:38.090557 1750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:37:38.090633 kubelet[1750]: I0910 00:37:38.090594 1750 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:37:38.090633 kubelet[1750]: I0910 00:37:38.090629 1750 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:37:38.090723 kubelet[1750]: E0910 00:37:38.090691 1750 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:37:38.091789 kubelet[1750]: W0910 00:37:38.091569 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused
Sep 10 00:37:38.091789 kubelet[1750]: E0910 00:37:38.091602 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:37:38.098893 kubelet[1750]: I0910 00:37:38.098864 1750 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:37:38.098893 kubelet[1750]: I0910 00:37:38.098879 1750 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:37:38.098893 kubelet[1750]: I0910 00:37:38.098903 1750 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:37:38.120216 kubelet[1750]: E0910 00:37:38.120174 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:37:38.191600 kubelet[1750]: E0910 00:37:38.191444 1750 kubelet.go:2345] "Skipping pod synchronization"
err="container runtime status check may not have completed yet" Sep 10 00:37:38.221088 kubelet[1750]: E0910 00:37:38.221025 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:38.221416 kubelet[1750]: E0910 00:37:38.221363 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Sep 10 00:37:38.321886 kubelet[1750]: E0910 00:37:38.321819 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:38.392158 kubelet[1750]: E0910 00:37:38.392076 1750 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:37:38.422731 kubelet[1750]: E0910 00:37:38.422682 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:38.523947 kubelet[1750]: E0910 00:37:38.523796 1750 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:38.605278 kubelet[1750]: I0910 00:37:38.605189 1750 policy_none.go:49] "None policy: Start" Sep 10 00:37:38.606476 kubelet[1750]: I0910 00:37:38.606443 1750 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:37:38.606584 kubelet[1750]: I0910 00:37:38.606515 1750 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:37:38.622488 kubelet[1750]: E0910 00:37:38.622408 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Sep 10 00:37:38.624458 kubelet[1750]: E0910 00:37:38.624430 1750 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:38.657262 kubelet[1750]: I0910 00:37:38.657209 1750 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:37:38.657450 kubelet[1750]: I0910 00:37:38.657378 1750 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:37:38.657450 kubelet[1750]: I0910 00:37:38.657394 1750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:37:38.657822 kubelet[1750]: I0910 00:37:38.657779 1750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:37:38.659257 kubelet[1750]: E0910 00:37:38.659231 1750 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:37:38.759518 kubelet[1750]: I0910 00:37:38.759441 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:37:38.760024 kubelet[1750]: E0910 00:37:38.759961 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 10 00:37:38.823193 kubelet[1750]: I0910 00:37:38.823136 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:38.823193 kubelet[1750]: I0910 00:37:38.823186 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:38.823407 kubelet[1750]: I0910 00:37:38.823211 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83894b0c15ab9ace956581ad7c666c30-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"83894b0c15ab9ace956581ad7c666c30\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:38.823407 kubelet[1750]: I0910 00:37:38.823233 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83894b0c15ab9ace956581ad7c666c30-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"83894b0c15ab9ace956581ad7c666c30\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:38.823407 kubelet[1750]: I0910 00:37:38.823262 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:38.823407 kubelet[1750]: I0910 00:37:38.823282 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:38.823407 kubelet[1750]: I0910 00:37:38.823300 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:38.823566 kubelet[1750]: I0910 00:37:38.823321 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:38.823566 kubelet[1750]: I0910 00:37:38.823342 1750 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83894b0c15ab9ace956581ad7c666c30-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"83894b0c15ab9ace956581ad7c666c30\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:38.830827 kubelet[1750]: W0910 00:37:38.830728 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 10 00:37:38.830882 kubelet[1750]: E0910 00:37:38.830844 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:38.924515 kubelet[1750]: W0910 00:37:38.924385 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 10 
00:37:38.924722 kubelet[1750]: E0910 00:37:38.924528 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:38.962480 kubelet[1750]: I0910 00:37:38.962421 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:37:38.963010 kubelet[1750]: E0910 00:37:38.962945 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 10 00:37:39.099176 kubelet[1750]: E0910 00:37:39.099044 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:39.099927 kubelet[1750]: E0910 00:37:39.099466 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:39.099991 env[1312]: time="2025-09-10T00:37:39.099631954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:39.100451 env[1312]: time="2025-09-10T00:37:39.100419203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:39.108651 kubelet[1750]: E0910 00:37:39.108624 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:39.109073 env[1312]: 
time="2025-09-10T00:37:39.109036090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:83894b0c15ab9ace956581ad7c666c30,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:39.364621 kubelet[1750]: I0910 00:37:39.364514 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:37:39.364978 kubelet[1750]: E0910 00:37:39.364921 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 10 00:37:39.423667 kubelet[1750]: E0910 00:37:39.423604 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Sep 10 00:37:39.529139 kubelet[1750]: W0910 00:37:39.529057 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 10 00:37:39.529263 kubelet[1750]: E0910 00:37:39.529141 1750 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:39.618202 kubelet[1750]: W0910 00:37:39.618059 1750 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Sep 10 00:37:39.618202 kubelet[1750]: E0910 00:37:39.618136 1750 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:39.866268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933967272.mount: Deactivated successfully. Sep 10 00:37:39.873882 env[1312]: time="2025-09-10T00:37:39.873770078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.876929 env[1312]: time="2025-09-10T00:37:39.876865671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.877750 env[1312]: time="2025-09-10T00:37:39.877714237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.878670 env[1312]: time="2025-09-10T00:37:39.878640704Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.881150 env[1312]: time="2025-09-10T00:37:39.881125416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.882361 env[1312]: time="2025-09-10T00:37:39.882328922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.883530 env[1312]: time="2025-09-10T00:37:39.883510797Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.884814 env[1312]: time="2025-09-10T00:37:39.884776043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.886835 env[1312]: time="2025-09-10T00:37:39.886810843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.888159 env[1312]: time="2025-09-10T00:37:39.888119501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.889562 env[1312]: time="2025-09-10T00:37:39.889535676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.891067 env[1312]: time="2025-09-10T00:37:39.891030180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:37:39.923695 env[1312]: time="2025-09-10T00:37:39.923609906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:39.923960 env[1312]: time="2025-09-10T00:37:39.923659692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:39.923960 env[1312]: time="2025-09-10T00:37:39.923669851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:39.923960 env[1312]: time="2025-09-10T00:37:39.923790722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cf0d94930117f883b99d3b7846eb235eae0ee4cedc00a07588638097ea1470e pid=1792 runtime=io.containerd.runc.v2 Sep 10 00:37:39.939200 env[1312]: time="2025-09-10T00:37:39.938898390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:39.939200 env[1312]: time="2025-09-10T00:37:39.938977853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:39.939200 env[1312]: time="2025-09-10T00:37:39.938993181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:39.939403 env[1312]: time="2025-09-10T00:37:39.939254302Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f236efaec103e8d83b18259a9f2f6cb40dce40fd07c47be932cccdda4a6719ab pid=1806 runtime=io.containerd.runc.v2 Sep 10 00:37:39.957951 env[1312]: time="2025-09-10T00:37:39.957858231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:39.958178 env[1312]: time="2025-09-10T00:37:39.958119251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:39.958178 env[1312]: time="2025-09-10T00:37:39.958137286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:39.958522 env[1312]: time="2025-09-10T00:37:39.958466757Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78af766b6bc532a13315e7397a2681348c3bfde3a9ce5fdbe37941b535c2a219 pid=1834 runtime=io.containerd.runc.v2 Sep 10 00:37:40.048437 kubelet[1750]: E0910 00:37:40.048354 1750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:40.156981 env[1312]: time="2025-09-10T00:37:40.156840176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f236efaec103e8d83b18259a9f2f6cb40dce40fd07c47be932cccdda4a6719ab\"" Sep 10 00:37:40.157312 env[1312]: time="2025-09-10T00:37:40.157158406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"78af766b6bc532a13315e7397a2681348c3bfde3a9ce5fdbe37941b535c2a219\"" Sep 10 00:37:40.159059 env[1312]: time="2025-09-10T00:37:40.159015741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:83894b0c15ab9ace956581ad7c666c30,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cf0d94930117f883b99d3b7846eb235eae0ee4cedc00a07588638097ea1470e\"" Sep 10 00:37:40.159607 kubelet[1750]: E0910 00:37:40.159588 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 
00:37:40.159672 kubelet[1750]: E0910 00:37:40.159619 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:40.159844 kubelet[1750]: E0910 00:37:40.159831 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:40.161659 env[1312]: time="2025-09-10T00:37:40.161627359Z" level=info msg="CreateContainer within sandbox \"f236efaec103e8d83b18259a9f2f6cb40dce40fd07c47be932cccdda4a6719ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:37:40.161720 env[1312]: time="2025-09-10T00:37:40.161691202Z" level=info msg="CreateContainer within sandbox \"0cf0d94930117f883b99d3b7846eb235eae0ee4cedc00a07588638097ea1470e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:37:40.161758 env[1312]: time="2025-09-10T00:37:40.161644573Z" level=info msg="CreateContainer within sandbox \"78af766b6bc532a13315e7397a2681348c3bfde3a9ce5fdbe37941b535c2a219\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:37:40.166290 kubelet[1750]: I0910 00:37:40.166263 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:37:40.166675 kubelet[1750]: E0910 00:37:40.166647 1750 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Sep 10 00:37:40.191001 env[1312]: time="2025-09-10T00:37:40.190943294Z" level=info msg="CreateContainer within sandbox \"0cf0d94930117f883b99d3b7846eb235eae0ee4cedc00a07588638097ea1470e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9245cdcbee9ab16df08e50248aaa062d644473f1c7e0b31778a17692d7ddcb6\"" Sep 10 00:37:40.191463 
env[1312]: time="2025-09-10T00:37:40.191438251Z" level=info msg="StartContainer for \"d9245cdcbee9ab16df08e50248aaa062d644473f1c7e0b31778a17692d7ddcb6\"" Sep 10 00:37:40.195686 env[1312]: time="2025-09-10T00:37:40.195647559Z" level=info msg="CreateContainer within sandbox \"78af766b6bc532a13315e7397a2681348c3bfde3a9ce5fdbe37941b535c2a219\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"021fca3c26819976057a06dcc83bcc99c67d525e462d52307cb6bfe0f96657d9\"" Sep 10 00:37:40.196098 env[1312]: time="2025-09-10T00:37:40.196065539Z" level=info msg="StartContainer for \"021fca3c26819976057a06dcc83bcc99c67d525e462d52307cb6bfe0f96657d9\"" Sep 10 00:37:40.197246 env[1312]: time="2025-09-10T00:37:40.197197816Z" level=info msg="CreateContainer within sandbox \"f236efaec103e8d83b18259a9f2f6cb40dce40fd07c47be932cccdda4a6719ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"043c32d2786704a77505c408c5ce0ffa37f559b8e9d56bb07f8184a4184691bc\"" Sep 10 00:37:40.197649 env[1312]: time="2025-09-10T00:37:40.197624313Z" level=info msg="StartContainer for \"043c32d2786704a77505c408c5ce0ffa37f559b8e9d56bb07f8184a4184691bc\"" Sep 10 00:37:40.267473 env[1312]: time="2025-09-10T00:37:40.267427144Z" level=info msg="StartContainer for \"043c32d2786704a77505c408c5ce0ffa37f559b8e9d56bb07f8184a4184691bc\" returns successfully" Sep 10 00:37:40.271342 env[1312]: time="2025-09-10T00:37:40.271312922Z" level=info msg="StartContainer for \"d9245cdcbee9ab16df08e50248aaa062d644473f1c7e0b31778a17692d7ddcb6\" returns successfully" Sep 10 00:37:40.297781 env[1312]: time="2025-09-10T00:37:40.297709819Z" level=info msg="StartContainer for \"021fca3c26819976057a06dcc83bcc99c67d525e462d52307cb6bfe0f96657d9\" returns successfully" Sep 10 00:37:41.100768 kubelet[1750]: E0910 00:37:41.100710 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 10 00:37:41.102789 kubelet[1750]: E0910 00:37:41.102740 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:41.104034 kubelet[1750]: E0910 00:37:41.104019 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:41.657517 kubelet[1750]: E0910 00:37:41.657451 1750 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:37:41.768750 kubelet[1750]: I0910 00:37:41.768705 1750 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:37:41.912956 kubelet[1750]: I0910 00:37:41.912820 1750 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:37:42.002898 kubelet[1750]: I0910 00:37:42.002839 1750 apiserver.go:52] "Watching apiserver" Sep 10 00:37:42.019552 kubelet[1750]: I0910 00:37:42.019473 1750 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:37:42.207558 kubelet[1750]: E0910 00:37:42.207369 1750 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:42.208089 kubelet[1750]: E0910 00:37:42.207615 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:44.094801 kubelet[1750]: E0910 00:37:44.094757 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 
00:37:44.109020 kubelet[1750]: E0910 00:37:44.108977 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:44.743537 systemd[1]: Reloading. Sep 10 00:37:44.800953 /usr/lib/systemd/system-generators/torcx-generator[2048]: time="2025-09-10T00:37:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 10 00:37:44.801345 /usr/lib/systemd/system-generators/torcx-generator[2048]: time="2025-09-10T00:37:44Z" level=info msg="torcx already run" Sep 10 00:37:44.885267 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 10 00:37:44.885285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 10 00:37:44.903087 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:37:44.986243 systemd[1]: Stopping kubelet.service... Sep 10 00:37:45.011051 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:37:45.011369 systemd[1]: Stopped kubelet.service. Sep 10 00:37:45.013250 systemd[1]: Starting kubelet.service... Sep 10 00:37:45.153843 systemd[1]: Started kubelet.service. Sep 10 00:37:45.194831 kubelet[2104]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:37:45.194831 kubelet[2104]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 10 00:37:45.194831 kubelet[2104]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:37:45.195354 kubelet[2104]: I0910 00:37:45.194909 2104 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:37:45.201888 kubelet[2104]: I0910 00:37:45.201829 2104 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:37:45.201888 kubelet[2104]: I0910 00:37:45.201867 2104 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:37:45.202163 kubelet[2104]: I0910 00:37:45.202141 2104 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:37:45.203345 kubelet[2104]: I0910 00:37:45.203321 2104 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:37:45.205155 kubelet[2104]: I0910 00:37:45.205128 2104 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:37:45.210775 kubelet[2104]: E0910 00:37:45.210730 2104 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:37:45.210878 kubelet[2104]: I0910 00:37:45.210778 2104 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 10 00:37:45.214690 kubelet[2104]: I0910 00:37:45.214656 2104 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:37:45.215336 kubelet[2104]: I0910 00:37:45.215310 2104 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 10 00:37:45.215486 kubelet[2104]: I0910 00:37:45.215452 2104 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:37:45.215916 kubelet[2104]: I0910 00:37:45.215482 2104 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerR
eservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 10 00:37:45.216048 kubelet[2104]: I0910 00:37:45.215925 2104 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:37:45.216048 kubelet[2104]: I0910 00:37:45.215940 2104 container_manager_linux.go:300] "Creating device plugin manager" Sep 10 00:37:45.216048 kubelet[2104]: I0910 00:37:45.215977 2104 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:37:45.216130 kubelet[2104]: I0910 00:37:45.216085 2104 kubelet.go:408] "Attempting to sync node with API server" Sep 10 00:37:45.216130 kubelet[2104]: I0910 00:37:45.216104 2104 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:37:45.216193 kubelet[2104]: I0910 00:37:45.216137 2104 kubelet.go:314] "Adding apiserver pod source" Sep 10 00:37:45.216193 kubelet[2104]: I0910 00:37:45.216151 2104 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:37:45.217548 kubelet[2104]: I0910 00:37:45.217449 2104 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 10 00:37:45.217973 kubelet[2104]: I0910 00:37:45.217955 2104 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:37:45.218431 kubelet[2104]: I0910 00:37:45.218410 2104 server.go:1274] "Started kubelet" Sep 10 00:37:45.218717 kubelet[2104]: I0910 00:37:45.218687 2104 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:37:45.218923 kubelet[2104]: I0910 00:37:45.218849 2104 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:37:45.219166 kubelet[2104]: I0910 00:37:45.219144 2104 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" 
Sep 10 00:37:45.220146 kubelet[2104]: I0910 00:37:45.220130 2104 server.go:449] "Adding debug handlers to kubelet server" Sep 10 00:37:45.222186 kubelet[2104]: E0910 00:37:45.222157 2104 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:37:45.222629 kubelet[2104]: I0910 00:37:45.222608 2104 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:37:45.223324 kubelet[2104]: I0910 00:37:45.223309 2104 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:37:45.224299 kubelet[2104]: I0910 00:37:45.224268 2104 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 10 00:37:45.224698 kubelet[2104]: I0910 00:37:45.224677 2104 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 10 00:37:45.224820 kubelet[2104]: I0910 00:37:45.224790 2104 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:37:45.225049 kubelet[2104]: E0910 00:37:45.225022 2104 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:45.228943 kubelet[2104]: I0910 00:37:45.228694 2104 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:37:45.228943 kubelet[2104]: I0910 00:37:45.228795 2104 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:37:45.232105 kubelet[2104]: I0910 00:37:45.232085 2104 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:37:45.245718 kubelet[2104]: I0910 00:37:45.245640 2104 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 10 00:37:45.248271 kubelet[2104]: I0910 00:37:45.248258 2104 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 00:37:45.248352 kubelet[2104]: I0910 00:37:45.248338 2104 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 10 00:37:45.248436 kubelet[2104]: I0910 00:37:45.248421 2104 kubelet.go:2321] "Starting kubelet main sync loop" Sep 10 00:37:45.248583 kubelet[2104]: E0910 00:37:45.248565 2104 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:37:45.279222 kubelet[2104]: I0910 00:37:45.279116 2104 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 10 00:37:45.279222 kubelet[2104]: I0910 00:37:45.279135 2104 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 10 00:37:45.279222 kubelet[2104]: I0910 00:37:45.279155 2104 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:37:45.279423 kubelet[2104]: I0910 00:37:45.279309 2104 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:37:45.279423 kubelet[2104]: I0910 00:37:45.279320 2104 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:37:45.279423 kubelet[2104]: I0910 00:37:45.279337 2104 policy_none.go:49] "None policy: Start" Sep 10 00:37:45.280026 kubelet[2104]: I0910 00:37:45.280012 2104 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 10 00:37:45.280026 kubelet[2104]: I0910 00:37:45.280028 2104 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:37:45.280156 kubelet[2104]: I0910 00:37:45.280144 2104 state_mem.go:75] "Updated machine memory state" Sep 10 00:37:45.281379 kubelet[2104]: I0910 00:37:45.281356 2104 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:37:45.281582 kubelet[2104]: I0910 00:37:45.281570 2104 eviction_manager.go:189] 
"Eviction manager: starting control loop" Sep 10 00:37:45.281634 kubelet[2104]: I0910 00:37:45.281599 2104 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:37:45.281793 kubelet[2104]: I0910 00:37:45.281773 2104 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:37:45.386038 kubelet[2104]: I0910 00:37:45.385995 2104 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:37:45.425835 kubelet[2104]: I0910 00:37:45.425762 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83894b0c15ab9ace956581ad7c666c30-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"83894b0c15ab9ace956581ad7c666c30\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:45.425835 kubelet[2104]: I0910 00:37:45.425832 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:45.426033 kubelet[2104]: I0910 00:37:45.425857 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:45.426033 kubelet[2104]: I0910 00:37:45.425881 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:45.426033 kubelet[2104]: I0910 00:37:45.425903 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:45.426033 kubelet[2104]: I0910 00:37:45.425923 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:45.426033 kubelet[2104]: I0910 00:37:45.425940 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83894b0c15ab9ace956581ad7c666c30-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"83894b0c15ab9ace956581ad7c666c30\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:45.426157 kubelet[2104]: I0910 00:37:45.425963 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83894b0c15ab9ace956581ad7c666c30-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"83894b0c15ab9ace956581ad7c666c30\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:45.426157 kubelet[2104]: I0910 00:37:45.425984 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:45.476100 kubelet[2104]: E0910 00:37:45.476047 2104 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:45.705823 kubelet[2104]: E0910 00:37:45.705773 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.705974 kubelet[2104]: E0910 00:37:45.705782 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.777078 kubelet[2104]: E0910 00:37:45.777007 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.778201 kubelet[2104]: I0910 00:37:45.778177 2104 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 10 00:37:45.778295 kubelet[2104]: I0910 00:37:45.778260 2104 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 10 00:37:45.885038 update_engine[1295]: I0910 00:37:45.884956 1295 update_attempter.cc:509] Updating boot flags... 
Sep 10 00:37:46.120477 sudo[2154]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 00:37:46.120675 sudo[2154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 10 00:37:46.217460 kubelet[2104]: I0910 00:37:46.217369 2104 apiserver.go:52] "Watching apiserver" Sep 10 00:37:46.225621 kubelet[2104]: I0910 00:37:46.225573 2104 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:37:46.257029 kubelet[2104]: E0910 00:37:46.256975 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:46.257029 kubelet[2104]: E0910 00:37:46.256975 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:46.272226 kubelet[2104]: E0910 00:37:46.272176 2104 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:46.272438 kubelet[2104]: E0910 00:37:46.272417 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:46.284112 kubelet[2104]: I0910 00:37:46.283871 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.28383932 podStartE2EDuration="1.28383932s" podCreationTimestamp="2025-09-10 00:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:46.276696466 +0000 UTC m=+1.116219086" watchObservedRunningTime="2025-09-10 00:37:46.28383932 +0000 UTC m=+1.123361941" Sep 10 
00:37:46.292716 kubelet[2104]: I0910 00:37:46.292663 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.29248421 podStartE2EDuration="2.29248421s" podCreationTimestamp="2025-09-10 00:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:46.284135102 +0000 UTC m=+1.123657722" watchObservedRunningTime="2025-09-10 00:37:46.29248421 +0000 UTC m=+1.132006830" Sep 10 00:37:46.292992 kubelet[2104]: I0910 00:37:46.292774 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2927681899999999 podStartE2EDuration="1.29276819s" podCreationTimestamp="2025-09-10 00:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:46.292408086 +0000 UTC m=+1.131930706" watchObservedRunningTime="2025-09-10 00:37:46.29276819 +0000 UTC m=+1.132290830" Sep 10 00:37:46.702283 sudo[2154]: pam_unix(sudo:session): session closed for user root Sep 10 00:37:47.258963 kubelet[2104]: E0910 00:37:47.258915 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:48.236419 kubelet[2104]: E0910 00:37:48.236372 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:48.260958 kubelet[2104]: E0910 00:37:48.260915 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:48.842071 kubelet[2104]: E0910 00:37:48.842021 2104 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:49.180177 sudo[1448]: pam_unix(sudo:session): session closed for user root Sep 10 00:37:49.181778 sshd[1442]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:49.184673 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:41018.service: Deactivated successfully. Sep 10 00:37:49.185948 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:37:49.185999 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:37:49.186825 systemd-logind[1293]: Removed session 7. Sep 10 00:37:49.378811 kubelet[2104]: E0910 00:37:49.378749 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:50.065629 kubelet[2104]: I0910 00:37:50.065584 2104 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:37:50.065938 env[1312]: time="2025-09-10T00:37:50.065890526Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 00:37:50.066475 kubelet[2104]: I0910 00:37:50.066458 2104 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:37:51.056672 kubelet[2104]: I0910 00:37:51.056599 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czwdw\" (UniqueName: \"kubernetes.io/projected/6633c719-8189-467b-bdb6-567fa652ae08-kube-api-access-czwdw\") pod \"kube-proxy-hz262\" (UID: \"6633c719-8189-467b-bdb6-567fa652ae08\") " pod="kube-system/kube-proxy-hz262" Sep 10 00:37:51.056672 kubelet[2104]: I0910 00:37:51.056656 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-etc-cni-netd\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.056672 kubelet[2104]: I0910 00:37:51.056688 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-cgroup\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057225 kubelet[2104]: I0910 00:37:51.056711 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab6e75b4-2401-4c17-bb89-7a450c5017a6-clustermesh-secrets\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057225 kubelet[2104]: I0910 00:37:51.056731 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-config-path\") pod \"cilium-q6gm9\" (UID: 
\"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057225 kubelet[2104]: I0910 00:37:51.056749 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-bpf-maps\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057225 kubelet[2104]: I0910 00:37:51.056773 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-lib-modules\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057225 kubelet[2104]: I0910 00:37:51.056799 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hostproc\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057225 kubelet[2104]: I0910 00:37:51.056814 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6633c719-8189-467b-bdb6-567fa652ae08-xtables-lock\") pod \"kube-proxy-hz262\" (UID: \"6633c719-8189-467b-bdb6-567fa652ae08\") " pod="kube-system/kube-proxy-hz262" Sep 10 00:37:51.057448 kubelet[2104]: I0910 00:37:51.056827 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-run\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057448 kubelet[2104]: I0910 00:37:51.056841 2104 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-xtables-lock\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057448 kubelet[2104]: I0910 00:37:51.056863 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-kernel\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057448 kubelet[2104]: I0910 00:37:51.056883 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwbz\" (UniqueName: \"kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-kube-api-access-gqwbz\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057448 kubelet[2104]: I0910 00:37:51.056914 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hubble-tls\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057448 kubelet[2104]: I0910 00:37:51.056942 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6633c719-8189-467b-bdb6-567fa652ae08-kube-proxy\") pod \"kube-proxy-hz262\" (UID: \"6633c719-8189-467b-bdb6-567fa652ae08\") " pod="kube-system/kube-proxy-hz262" Sep 10 00:37:51.057635 kubelet[2104]: I0910 00:37:51.056972 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cni-path\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057635 kubelet[2104]: I0910 00:37:51.056988 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-net\") pod \"cilium-q6gm9\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " pod="kube-system/cilium-q6gm9" Sep 10 00:37:51.057635 kubelet[2104]: I0910 00:37:51.057002 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6633c719-8189-467b-bdb6-567fa652ae08-lib-modules\") pod \"kube-proxy-hz262\" (UID: \"6633c719-8189-467b-bdb6-567fa652ae08\") " pod="kube-system/kube-proxy-hz262" Sep 10 00:37:51.157746 kubelet[2104]: I0910 00:37:51.157682 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ec1db8-411c-4ac4-99cc-e07c70e56eac-cilium-config-path\") pod \"cilium-operator-5d85765b45-4tl8c\" (UID: \"61ec1db8-411c-4ac4-99cc-e07c70e56eac\") " pod="kube-system/cilium-operator-5d85765b45-4tl8c" Sep 10 00:37:51.157746 kubelet[2104]: I0910 00:37:51.157753 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mq55\" (UniqueName: \"kubernetes.io/projected/61ec1db8-411c-4ac4-99cc-e07c70e56eac-kube-api-access-9mq55\") pod \"cilium-operator-5d85765b45-4tl8c\" (UID: \"61ec1db8-411c-4ac4-99cc-e07c70e56eac\") " pod="kube-system/cilium-operator-5d85765b45-4tl8c" Sep 10 00:37:51.158381 kubelet[2104]: I0910 00:37:51.158353 2104 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 10 00:37:51.271840 kubelet[2104]: E0910 00:37:51.271801 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.272511 env[1312]: time="2025-09-10T00:37:51.272437523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hz262,Uid:6633c719-8189-467b-bdb6-567fa652ae08,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:51.277753 kubelet[2104]: E0910 00:37:51.277713 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.278796 env[1312]: time="2025-09-10T00:37:51.278721748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6gm9,Uid:ab6e75b4-2401-4c17-bb89-7a450c5017a6,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:51.294866 env[1312]: time="2025-09-10T00:37:51.294765721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:51.294866 env[1312]: time="2025-09-10T00:37:51.294820946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:51.294866 env[1312]: time="2025-09-10T00:37:51.294834352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:51.295249 env[1312]: time="2025-09-10T00:37:51.295175698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4277f836a86649cd1c668273f3c9640be575c6b449a2d6279d703c9c98cfa9e9 pid=2213 runtime=io.containerd.runc.v2 Sep 10 00:37:51.304084 env[1312]: time="2025-09-10T00:37:51.303985547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:51.304084 env[1312]: time="2025-09-10T00:37:51.304039600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:51.304084 env[1312]: time="2025-09-10T00:37:51.304055701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:51.304466 env[1312]: time="2025-09-10T00:37:51.304428537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd pid=2238 runtime=io.containerd.runc.v2 Sep 10 00:37:51.337719 env[1312]: time="2025-09-10T00:37:51.336452614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hz262,Uid:6633c719-8189-467b-bdb6-567fa652ae08,Namespace:kube-system,Attempt:0,} returns sandbox id \"4277f836a86649cd1c668273f3c9640be575c6b449a2d6279d703c9c98cfa9e9\"" Sep 10 00:37:51.337901 kubelet[2104]: E0910 00:37:51.337268 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.341086 env[1312]: time="2025-09-10T00:37:51.341022301Z" level=info msg="CreateContainer within sandbox \"4277f836a86649cd1c668273f3c9640be575c6b449a2d6279d703c9c98cfa9e9\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:37:51.355432 env[1312]: time="2025-09-10T00:37:51.355386633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6gm9,Uid:ab6e75b4-2401-4c17-bb89-7a450c5017a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\"" Sep 10 00:37:51.356352 kubelet[2104]: E0910 00:37:51.356243 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.358256 env[1312]: time="2025-09-10T00:37:51.358218227Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 00:37:51.365086 env[1312]: time="2025-09-10T00:37:51.365024201Z" level=info msg="CreateContainer within sandbox \"4277f836a86649cd1c668273f3c9640be575c6b449a2d6279d703c9c98cfa9e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0ed3cc99fc447c41161aa2acfa2c61c35390e62fe6477300e4adfb32752cea0\"" Sep 10 00:37:51.367348 env[1312]: time="2025-09-10T00:37:51.367318166Z" level=info msg="StartContainer for \"d0ed3cc99fc447c41161aa2acfa2c61c35390e62fe6477300e4adfb32752cea0\"" Sep 10 00:37:51.414828 env[1312]: time="2025-09-10T00:37:51.414777906Z" level=info msg="StartContainer for \"d0ed3cc99fc447c41161aa2acfa2c61c35390e62fe6477300e4adfb32752cea0\" returns successfully" Sep 10 00:37:51.426408 kubelet[2104]: E0910 00:37:51.426375 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.426985 env[1312]: time="2025-09-10T00:37:51.426946698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4tl8c,Uid:61ec1db8-411c-4ac4-99cc-e07c70e56eac,Namespace:kube-system,Attempt:0,}" Sep 10 
00:37:51.442002 env[1312]: time="2025-09-10T00:37:51.441940703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:51.442002 env[1312]: time="2025-09-10T00:37:51.441974267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:51.442308 env[1312]: time="2025-09-10T00:37:51.442264767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:51.442516 env[1312]: time="2025-09-10T00:37:51.442468492Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61 pid=2331 runtime=io.containerd.runc.v2 Sep 10 00:37:51.489073 env[1312]: time="2025-09-10T00:37:51.489033457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4tl8c,Uid:61ec1db8-411c-4ac4-99cc-e07c70e56eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\"" Sep 10 00:37:51.490261 kubelet[2104]: E0910 00:37:51.489844 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:52.269679 kubelet[2104]: E0910 00:37:52.269641 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:52.374857 kubelet[2104]: I0910 00:37:52.374786 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hz262" podStartSLOduration=2.374751745 podStartE2EDuration="2.374751745s" podCreationTimestamp="2025-09-10 00:37:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:52.374722831 +0000 UTC m=+7.214245471" watchObservedRunningTime="2025-09-10 00:37:52.374751745 +0000 UTC m=+7.214274385" Sep 10 00:37:56.668242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248596317.mount: Deactivated successfully. Sep 10 00:37:58.243183 kubelet[2104]: E0910 00:37:58.243143 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:58.847467 kubelet[2104]: E0910 00:37:58.847420 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:59.385413 kubelet[2104]: E0910 00:37:59.385377 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:00.202673 env[1312]: time="2025-09-10T00:38:00.202596774Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:38:00.205244 env[1312]: time="2025-09-10T00:38:00.205201958Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:38:00.207074 env[1312]: time="2025-09-10T00:38:00.207024545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 
00:38:00.208390 env[1312]: time="2025-09-10T00:38:00.208311122Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 10 00:38:00.213168 env[1312]: time="2025-09-10T00:38:00.213102248Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:38:00.215382 env[1312]: time="2025-09-10T00:38:00.215333326Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:38:00.229341 env[1312]: time="2025-09-10T00:38:00.229276863Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\"" Sep 10 00:38:00.229980 env[1312]: time="2025-09-10T00:38:00.229954601Z" level=info msg="StartContainer for \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\"" Sep 10 00:38:00.248423 systemd[1]: run-containerd-runc-k8s.io-6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa-runc.2Aut5T.mount: Deactivated successfully. 
Sep 10 00:38:00.280247 env[1312]: time="2025-09-10T00:38:00.280165601Z" level=info msg="StartContainer for \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\" returns successfully" Sep 10 00:38:00.296374 kubelet[2104]: E0910 00:38:00.295723 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:00.296374 kubelet[2104]: E0910 00:38:00.296236 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:01.087673 env[1312]: time="2025-09-10T00:38:01.087585686Z" level=info msg="shim disconnected" id=6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa Sep 10 00:38:01.087673 env[1312]: time="2025-09-10T00:38:01.087669213Z" level=warning msg="cleaning up after shim disconnected" id=6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa namespace=k8s.io Sep 10 00:38:01.087673 env[1312]: time="2025-09-10T00:38:01.087683771Z" level=info msg="cleaning up dead shim" Sep 10 00:38:01.095958 env[1312]: time="2025-09-10T00:38:01.095891134Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:38:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2544 runtime=io.containerd.runc.v2\n" Sep 10 00:38:01.226902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa-rootfs.mount: Deactivated successfully. 
Sep 10 00:38:01.299000 kubelet[2104]: E0910 00:38:01.298952 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:01.300793 env[1312]: time="2025-09-10T00:38:01.300729998Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:38:01.906784 env[1312]: time="2025-09-10T00:38:01.906683497Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\"" Sep 10 00:38:01.907381 env[1312]: time="2025-09-10T00:38:01.907327671Z" level=info msg="StartContainer for \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\"" Sep 10 00:38:01.949428 env[1312]: time="2025-09-10T00:38:01.949380656Z" level=info msg="StartContainer for \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\" returns successfully" Sep 10 00:38:01.959131 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:38:01.959465 systemd[1]: Stopped systemd-sysctl.service. Sep 10 00:38:01.959684 systemd[1]: Stopping systemd-sysctl.service... Sep 10 00:38:01.961298 systemd[1]: Starting systemd-sysctl.service... Sep 10 00:38:01.968986 systemd[1]: Finished systemd-sysctl.service. 
Sep 10 00:38:01.985512 env[1312]: time="2025-09-10T00:38:01.985440783Z" level=info msg="shim disconnected" id=8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4 Sep 10 00:38:01.985512 env[1312]: time="2025-09-10T00:38:01.985486308Z" level=warning msg="cleaning up after shim disconnected" id=8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4 namespace=k8s.io Sep 10 00:38:01.985512 env[1312]: time="2025-09-10T00:38:01.985506786Z" level=info msg="cleaning up dead shim" Sep 10 00:38:01.991537 env[1312]: time="2025-09-10T00:38:01.991469158Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:38:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2609 runtime=io.containerd.runc.v2\n" Sep 10 00:38:02.225849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4-rootfs.mount: Deactivated successfully. Sep 10 00:38:02.302480 kubelet[2104]: E0910 00:38:02.302425 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:02.304860 env[1312]: time="2025-09-10T00:38:02.304812822Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:38:02.325026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884698236.mount: Deactivated successfully. 
Sep 10 00:38:02.331526 env[1312]: time="2025-09-10T00:38:02.331430699Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\"" Sep 10 00:38:02.333920 env[1312]: time="2025-09-10T00:38:02.333879274Z" level=info msg="StartContainer for \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\"" Sep 10 00:38:02.402276 env[1312]: time="2025-09-10T00:38:02.402215780Z" level=info msg="StartContainer for \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\" returns successfully" Sep 10 00:38:02.749549 env[1312]: time="2025-09-10T00:38:02.749440515Z" level=info msg="shim disconnected" id=bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd Sep 10 00:38:02.749549 env[1312]: time="2025-09-10T00:38:02.749549681Z" level=warning msg="cleaning up after shim disconnected" id=bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd namespace=k8s.io Sep 10 00:38:02.749819 env[1312]: time="2025-09-10T00:38:02.749559590Z" level=info msg="cleaning up dead shim" Sep 10 00:38:02.756341 env[1312]: time="2025-09-10T00:38:02.756294223Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2664 runtime=io.containerd.runc.v2\n" Sep 10 00:38:03.225687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd-rootfs.mount: Deactivated successfully. 
Sep 10 00:38:03.305447 kubelet[2104]: E0910 00:38:03.305399 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:03.307170 env[1312]: time="2025-09-10T00:38:03.307131564Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:38:03.332799 env[1312]: time="2025-09-10T00:38:03.332736547Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\"" Sep 10 00:38:03.333257 env[1312]: time="2025-09-10T00:38:03.333219336Z" level=info msg="StartContainer for \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\"" Sep 10 00:38:03.378429 env[1312]: time="2025-09-10T00:38:03.378375026Z" level=info msg="StartContainer for \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\" returns successfully" Sep 10 00:38:03.581408 env[1312]: time="2025-09-10T00:38:03.581324483Z" level=info msg="shim disconnected" id=6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d Sep 10 00:38:03.581408 env[1312]: time="2025-09-10T00:38:03.581395616Z" level=warning msg="cleaning up after shim disconnected" id=6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d namespace=k8s.io Sep 10 00:38:03.581408 env[1312]: time="2025-09-10T00:38:03.581408601Z" level=info msg="cleaning up dead shim" Sep 10 00:38:03.589107 env[1312]: time="2025-09-10T00:38:03.589049969Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:38:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2717 runtime=io.containerd.runc.v2\n" Sep 10 00:38:03.608788 env[1312]: 
time="2025-09-10T00:38:03.608711444Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:38:03.610700 env[1312]: time="2025-09-10T00:38:03.610659354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:38:03.612298 env[1312]: time="2025-09-10T00:38:03.612271331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 10 00:38:03.612862 env[1312]: time="2025-09-10T00:38:03.612822569Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 10 00:38:03.617084 env[1312]: time="2025-09-10T00:38:03.617039014Z" level=info msg="CreateContainer within sandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:38:03.628571 env[1312]: time="2025-09-10T00:38:03.628488907Z" level=info msg="CreateContainer within sandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\"" Sep 10 00:38:03.629056 env[1312]: time="2025-09-10T00:38:03.629002445Z" level=info msg="StartContainer for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\"" Sep 10 00:38:03.693851 env[1312]: 
time="2025-09-10T00:38:03.693781588Z" level=info msg="StartContainer for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" returns successfully" Sep 10 00:38:04.226805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d-rootfs.mount: Deactivated successfully. Sep 10 00:38:04.314834 kubelet[2104]: E0910 00:38:04.314788 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:04.321091 kubelet[2104]: E0910 00:38:04.321043 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:04.324200 env[1312]: time="2025-09-10T00:38:04.324139426Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:38:04.344982 kubelet[2104]: I0910 00:38:04.344907 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4tl8c" podStartSLOduration=1.220808975 podStartE2EDuration="13.34488384s" podCreationTimestamp="2025-09-10 00:37:51 +0000 UTC" firstStartedPulling="2025-09-10 00:37:51.490605375 +0000 UTC m=+6.330127985" lastFinishedPulling="2025-09-10 00:38:03.61468023 +0000 UTC m=+18.454202850" observedRunningTime="2025-09-10 00:38:04.332425534 +0000 UTC m=+19.171948154" watchObservedRunningTime="2025-09-10 00:38:04.34488384 +0000 UTC m=+19.184406460" Sep 10 00:38:04.371529 env[1312]: time="2025-09-10T00:38:04.371204147Z" level=info msg="CreateContainer within sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\"" Sep 10 00:38:04.372962 env[1312]: time="2025-09-10T00:38:04.372931551Z" level=info msg="StartContainer for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\"" Sep 10 00:38:04.462949 env[1312]: time="2025-09-10T00:38:04.462904995Z" level=info msg="StartContainer for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" returns successfully" Sep 10 00:38:04.616001 kubelet[2104]: I0910 00:38:04.615971 2104 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:38:04.855324 kubelet[2104]: I0910 00:38:04.855225 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtzsp\" (UniqueName: \"kubernetes.io/projected/405fd55f-6d66-4f3c-82f4-814359e6eb03-kube-api-access-qtzsp\") pod \"coredns-7c65d6cfc9-xn58s\" (UID: \"405fd55f-6d66-4f3c-82f4-814359e6eb03\") " pod="kube-system/coredns-7c65d6cfc9-xn58s" Sep 10 00:38:04.855324 kubelet[2104]: I0910 00:38:04.855285 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksw9\" (UniqueName: \"kubernetes.io/projected/ecfc463b-03ef-402f-a889-a17f00bfc67f-kube-api-access-dksw9\") pod \"coredns-7c65d6cfc9-xdqfl\" (UID: \"ecfc463b-03ef-402f-a889-a17f00bfc67f\") " pod="kube-system/coredns-7c65d6cfc9-xdqfl" Sep 10 00:38:04.855324 kubelet[2104]: I0910 00:38:04.855315 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/405fd55f-6d66-4f3c-82f4-814359e6eb03-config-volume\") pod \"coredns-7c65d6cfc9-xn58s\" (UID: \"405fd55f-6d66-4f3c-82f4-814359e6eb03\") " pod="kube-system/coredns-7c65d6cfc9-xn58s" Sep 10 00:38:04.855324 kubelet[2104]: I0910 00:38:04.855338 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/ecfc463b-03ef-402f-a889-a17f00bfc67f-config-volume\") pod \"coredns-7c65d6cfc9-xdqfl\" (UID: \"ecfc463b-03ef-402f-a889-a17f00bfc67f\") " pod="kube-system/coredns-7c65d6cfc9-xdqfl" Sep 10 00:38:05.227899 systemd[1]: run-containerd-runc-k8s.io-88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140-runc.iKJEqj.mount: Deactivated successfully. Sep 10 00:38:05.305068 kubelet[2104]: E0910 00:38:05.305020 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:05.305293 kubelet[2104]: E0910 00:38:05.305160 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:05.305664 env[1312]: time="2025-09-10T00:38:05.305616154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xn58s,Uid:405fd55f-6d66-4f3c-82f4-814359e6eb03,Namespace:kube-system,Attempt:0,}" Sep 10 00:38:05.306110 env[1312]: time="2025-09-10T00:38:05.306051595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdqfl,Uid:ecfc463b-03ef-402f-a889-a17f00bfc67f,Namespace:kube-system,Attempt:0,}" Sep 10 00:38:05.329384 kubelet[2104]: E0910 00:38:05.329327 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:05.329878 kubelet[2104]: E0910 00:38:05.329526 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:06.331793 kubelet[2104]: E0910 00:38:06.331735 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:07.315034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 10 00:38:07.315157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 10 00:38:07.315803 systemd-networkd[1083]: cilium_host: Link UP Sep 10 00:38:07.315910 systemd-networkd[1083]: cilium_net: Link UP Sep 10 00:38:07.316026 systemd-networkd[1083]: cilium_net: Gained carrier Sep 10 00:38:07.316176 systemd-networkd[1083]: cilium_host: Gained carrier Sep 10 00:38:07.333562 kubelet[2104]: E0910 00:38:07.333530 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:07.399399 systemd-networkd[1083]: cilium_vxlan: Link UP Sep 10 00:38:07.399407 systemd-networkd[1083]: cilium_vxlan: Gained carrier Sep 10 00:38:07.609537 kernel: NET: Registered PF_ALG protocol family Sep 10 00:38:07.860625 systemd-networkd[1083]: cilium_host: Gained IPv6LL Sep 10 00:38:07.987670 systemd-networkd[1083]: cilium_net: Gained IPv6LL Sep 10 00:38:08.184706 systemd-networkd[1083]: lxc_health: Link UP Sep 10 00:38:08.196766 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 10 00:38:08.196633 systemd-networkd[1083]: lxc_health: Gained carrier Sep 10 00:38:08.667971 systemd-networkd[1083]: lxcf2f49c46d69e: Link UP Sep 10 00:38:08.673554 kernel: eth0: renamed from tmpe2ef2 Sep 10 00:38:08.683433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 10 00:38:08.683575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf2f49c46d69e: link becomes ready Sep 10 00:38:08.683534 systemd-networkd[1083]: lxcf2f49c46d69e: Gained carrier Sep 10 00:38:08.697400 systemd-networkd[1083]: lxc7b4677cee57a: Link UP Sep 10 00:38:08.746549 kernel: eth0: renamed from tmp7b2f6 Sep 10 00:38:08.753319 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7b4677cee57a: link becomes ready Sep 10 00:38:08.752609 
systemd-networkd[1083]: lxc7b4677cee57a: Gained carrier Sep 10 00:38:09.205754 systemd-networkd[1083]: cilium_vxlan: Gained IPv6LL Sep 10 00:38:09.280260 kubelet[2104]: E0910 00:38:09.280217 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:09.295329 kubelet[2104]: I0910 00:38:09.295265 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q6gm9" podStartSLOduration=10.439972612 podStartE2EDuration="19.295246105s" podCreationTimestamp="2025-09-10 00:37:50 +0000 UTC" firstStartedPulling="2025-09-10 00:37:51.35732748 +0000 UTC m=+6.196850100" lastFinishedPulling="2025-09-10 00:38:00.212600972 +0000 UTC m=+15.052123593" observedRunningTime="2025-09-10 00:38:05.49690651 +0000 UTC m=+20.336429150" watchObservedRunningTime="2025-09-10 00:38:09.295246105 +0000 UTC m=+24.134768735" Sep 10 00:38:09.343097 kubelet[2104]: E0910 00:38:09.343067 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:09.907649 systemd-networkd[1083]: lxcf2f49c46d69e: Gained IPv6LL Sep 10 00:38:09.971624 systemd-networkd[1083]: lxc7b4677cee57a: Gained IPv6LL Sep 10 00:38:10.227696 systemd-networkd[1083]: lxc_health: Gained IPv6LL Sep 10 00:38:10.344575 kubelet[2104]: E0910 00:38:10.344543 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:12.304467 env[1312]: time="2025-09-10T00:38:12.304366069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:12.304467 env[1312]: time="2025-09-10T00:38:12.304455558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:12.304467 env[1312]: time="2025-09-10T00:38:12.304476908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:12.304986 env[1312]: time="2025-09-10T00:38:12.304722058Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b2f6af60db66f3fd81eb80086747d15935d1b68ce0c3033f57a7a78f858830f pid=3314 runtime=io.containerd.runc.v2 Sep 10 00:38:12.334804 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:12.398131 env[1312]: time="2025-09-10T00:38:12.398046985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:12.398131 env[1312]: time="2025-09-10T00:38:12.398088803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:12.398131 env[1312]: time="2025-09-10T00:38:12.398100035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:12.398402 env[1312]: time="2025-09-10T00:38:12.398232954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2ef2ab6f50895d3a39fb5c8b4f704132b5f3cfa08db2eb38e10dde56ebc4641 pid=3347 runtime=io.containerd.runc.v2 Sep 10 00:38:12.405252 env[1312]: time="2025-09-10T00:38:12.405198792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xn58s,Uid:405fd55f-6d66-4f3c-82f4-814359e6eb03,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2f6af60db66f3fd81eb80086747d15935d1b68ce0c3033f57a7a78f858830f\"" Sep 10 00:38:12.406051 kubelet[2104]: E0910 00:38:12.406021 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:12.408810 env[1312]: time="2025-09-10T00:38:12.408708110Z" level=info msg="CreateContainer within sandbox \"7b2f6af60db66f3fd81eb80086747d15935d1b68ce0c3033f57a7a78f858830f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:38:12.425471 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:12.432457 env[1312]: time="2025-09-10T00:38:12.432398615Z" level=info msg="CreateContainer within sandbox \"7b2f6af60db66f3fd81eb80086747d15935d1b68ce0c3033f57a7a78f858830f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4043d32482502b87c06351b892012cfb2adc3d82db80abbc30787b6f0914d470\"" Sep 10 00:38:12.434691 env[1312]: time="2025-09-10T00:38:12.433415888Z" level=info msg="StartContainer for \"4043d32482502b87c06351b892012cfb2adc3d82db80abbc30787b6f0914d470\"" Sep 10 00:38:12.451795 env[1312]: time="2025-09-10T00:38:12.451746755Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-xdqfl,Uid:ecfc463b-03ef-402f-a889-a17f00bfc67f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2ef2ab6f50895d3a39fb5c8b4f704132b5f3cfa08db2eb38e10dde56ebc4641\"" Sep 10 00:38:12.452629 kubelet[2104]: E0910 00:38:12.452604 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:12.456979 env[1312]: time="2025-09-10T00:38:12.456766273Z" level=info msg="CreateContainer within sandbox \"e2ef2ab6f50895d3a39fb5c8b4f704132b5f3cfa08db2eb38e10dde56ebc4641\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:38:12.475733 env[1312]: time="2025-09-10T00:38:12.475675918Z" level=info msg="CreateContainer within sandbox \"e2ef2ab6f50895d3a39fb5c8b4f704132b5f3cfa08db2eb38e10dde56ebc4641\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ebe54d3a0ef36924fbbbe52c02eafb21890d8f680f02f6d025d6e6aedf712dc\"" Sep 10 00:38:12.477295 env[1312]: time="2025-09-10T00:38:12.476342311Z" level=info msg="StartContainer for \"7ebe54d3a0ef36924fbbbe52c02eafb21890d8f680f02f6d025d6e6aedf712dc\"" Sep 10 00:38:12.481228 env[1312]: time="2025-09-10T00:38:12.481192070Z" level=info msg="StartContainer for \"4043d32482502b87c06351b892012cfb2adc3d82db80abbc30787b6f0914d470\" returns successfully" Sep 10 00:38:12.707535 env[1312]: time="2025-09-10T00:38:12.707348095Z" level=info msg="StartContainer for \"7ebe54d3a0ef36924fbbbe52c02eafb21890d8f680f02f6d025d6e6aedf712dc\" returns successfully" Sep 10 00:38:13.308773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769793397.mount: Deactivated successfully. 
Sep 10 00:38:13.350152 kubelet[2104]: E0910 00:38:13.350021 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:13.351992 kubelet[2104]: E0910 00:38:13.351960 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:13.797682 kubelet[2104]: I0910 00:38:13.797584 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xdqfl" podStartSLOduration=22.797555849 podStartE2EDuration="22.797555849s" podCreationTimestamp="2025-09-10 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:38:13.628186391 +0000 UTC m=+28.467709021" watchObservedRunningTime="2025-09-10 00:38:13.797555849 +0000 UTC m=+28.637078469" Sep 10 00:38:14.353941 kubelet[2104]: E0910 00:38:14.353892 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:14.353941 kubelet[2104]: E0910 00:38:14.353913 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:15.355751 kubelet[2104]: E0910 00:38:15.355713 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:15.356097 kubelet[2104]: E0910 00:38:15.355924 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 
00:38:23.451638 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:43746.service. Sep 10 00:38:23.495447 sshd[3470]: Accepted publickey for core from 10.0.0.1 port 43746 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:23.496878 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:23.501039 systemd-logind[1293]: New session 8 of user core. Sep 10 00:38:23.501963 systemd[1]: Started session-8.scope. Sep 10 00:38:24.085950 sshd[3470]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:24.089042 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:43746.service: Deactivated successfully. Sep 10 00:38:24.090011 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:38:24.091113 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:38:24.092156 systemd-logind[1293]: Removed session 8. Sep 10 00:38:29.090103 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:43762.service. Sep 10 00:38:29.127430 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 43762 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:29.128625 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:29.132592 systemd-logind[1293]: New session 9 of user core. Sep 10 00:38:29.133400 systemd[1]: Started session-9.scope. Sep 10 00:38:29.272434 sshd[3505]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:29.274398 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:43762.service: Deactivated successfully. Sep 10 00:38:29.275354 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:38:29.275362 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:38:29.276231 systemd-logind[1293]: Removed session 9. Sep 10 00:38:34.275767 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:33628.service. 
Sep 10 00:38:34.312223 sshd[3520]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:34.313552 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:34.317107 systemd-logind[1293]: New session 10 of user core. Sep 10 00:38:34.317781 systemd[1]: Started session-10.scope. Sep 10 00:38:34.449152 sshd[3520]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:34.452122 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:33628.service: Deactivated successfully. Sep 10 00:38:34.453058 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:38:34.453068 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:38:34.453855 systemd-logind[1293]: Removed session 10. Sep 10 00:38:39.452074 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:33644.service. Sep 10 00:38:39.838966 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:39.840097 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:39.844360 systemd-logind[1293]: New session 11 of user core. Sep 10 00:38:39.845385 systemd[1]: Started session-11.scope. Sep 10 00:38:40.014224 sshd[3535]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:40.017189 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:33644.service: Deactivated successfully. Sep 10 00:38:40.018651 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:38:40.019132 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:38:40.020243 systemd-logind[1293]: Removed session 11. Sep 10 00:38:45.018029 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:54776.service. 
Sep 10 00:38:45.121259 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 54776 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:45.122380 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:45.126121 systemd-logind[1293]: New session 12 of user core. Sep 10 00:38:45.126884 systemd[1]: Started session-12.scope. Sep 10 00:38:45.240557 sshd[3551]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:45.242941 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:54776.service: Deactivated successfully. Sep 10 00:38:45.244086 systemd-logind[1293]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:38:45.244149 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:38:45.245000 systemd-logind[1293]: Removed session 12. Sep 10 00:38:50.243896 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:40798.service. Sep 10 00:38:50.282190 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 40798 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:50.283744 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:50.289005 systemd-logind[1293]: New session 13 of user core. Sep 10 00:38:50.289817 systemd[1]: Started session-13.scope. Sep 10 00:38:50.591643 sshd[3568]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:50.594844 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:40814.service. Sep 10 00:38:50.595430 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:40798.service: Deactivated successfully. Sep 10 00:38:50.596953 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:38:50.597440 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:38:50.598437 systemd-logind[1293]: Removed session 13. 
Sep 10 00:38:50.637682 sshd[3582]: Accepted publickey for core from 10.0.0.1 port 40814 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:50.639375 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:50.644363 systemd-logind[1293]: New session 14 of user core. Sep 10 00:38:50.645570 systemd[1]: Started session-14.scope. Sep 10 00:38:51.032978 sshd[3582]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:51.037086 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:40818.service. Sep 10 00:38:51.038983 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:40814.service: Deactivated successfully. Sep 10 00:38:51.039780 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:38:51.039970 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:38:51.041954 systemd-logind[1293]: Removed session 14. Sep 10 00:38:51.090432 sshd[3594]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:51.091854 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:51.096799 systemd-logind[1293]: New session 15 of user core. Sep 10 00:38:51.097967 systemd[1]: Started session-15.scope. Sep 10 00:38:51.218919 sshd[3594]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:51.221569 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:40818.service: Deactivated successfully. Sep 10 00:38:51.222708 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:38:51.222773 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:38:51.223542 systemd-logind[1293]: Removed session 15. Sep 10 00:38:56.223065 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:40820.service. 
Sep 10 00:38:56.259106 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 40820 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:38:56.260204 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:38:56.264812 systemd-logind[1293]: New session 16 of user core. Sep 10 00:38:56.265859 systemd[1]: Started session-16.scope. Sep 10 00:38:56.391419 sshd[3613]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:56.393483 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:40820.service: Deactivated successfully. Sep 10 00:38:56.394580 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:38:56.394738 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:38:56.395651 systemd-logind[1293]: Removed session 16. Sep 10 00:38:59.250111 kubelet[2104]: E0910 00:38:59.250071 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:01.395965 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:33636.service. Sep 10 00:39:01.437142 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 33636 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:01.438866 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:01.444821 systemd-logind[1293]: New session 17 of user core. Sep 10 00:39:01.445895 systemd[1]: Started session-17.scope. Sep 10 00:39:01.562547 sshd[3627]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:01.565112 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:33636.service: Deactivated successfully. Sep 10 00:39:01.565932 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:39:01.566765 systemd-logind[1293]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:39:01.567450 systemd-logind[1293]: Removed session 17. 
Sep 10 00:39:06.565890 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:33652.service. Sep 10 00:39:06.603737 sshd[3641]: Accepted publickey for core from 10.0.0.1 port 33652 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:06.605071 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:06.608805 systemd-logind[1293]: New session 18 of user core. Sep 10 00:39:06.609769 systemd[1]: Started session-18.scope. Sep 10 00:39:06.730922 sshd[3641]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:06.733658 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:33660.service. Sep 10 00:39:06.734061 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:33652.service: Deactivated successfully. Sep 10 00:39:06.735210 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:39:06.735239 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:39:06.736372 systemd-logind[1293]: Removed session 18. Sep 10 00:39:06.775765 sshd[3653]: Accepted publickey for core from 10.0.0.1 port 33660 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:06.777660 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:06.783394 systemd-logind[1293]: New session 19 of user core. Sep 10 00:39:06.784376 systemd[1]: Started session-19.scope. Sep 10 00:39:07.585552 sshd[3653]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:07.587976 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:33662.service. Sep 10 00:39:07.588415 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:33660.service: Deactivated successfully. Sep 10 00:39:07.589597 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:39:07.589630 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:39:07.590442 systemd-logind[1293]: Removed session 19. 
Sep 10 00:39:07.626185 sshd[3667]: Accepted publickey for core from 10.0.0.1 port 33662 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:07.627783 sshd[3667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:07.631852 systemd-logind[1293]: New session 20 of user core. Sep 10 00:39:07.632821 systemd[1]: Started session-20.scope. Sep 10 00:39:09.315289 sshd[3667]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:09.317853 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:33668.service. Sep 10 00:39:09.320424 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:33662.service: Deactivated successfully. Sep 10 00:39:09.321961 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:39:09.322610 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:39:09.323755 systemd-logind[1293]: Removed session 20. Sep 10 00:39:09.367811 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 33668 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:09.369392 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:09.373112 systemd-logind[1293]: New session 21 of user core. Sep 10 00:39:09.373855 systemd[1]: Started session-21.scope. Sep 10 00:39:10.030898 sshd[3688]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:10.034265 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:57972.service. Sep 10 00:39:10.034946 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:33668.service: Deactivated successfully. Sep 10 00:39:10.036773 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:39:10.037293 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:39:10.038100 systemd-logind[1293]: Removed session 21. 
Sep 10 00:39:10.069791 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 57972 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:10.071052 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:10.074947 systemd-logind[1293]: New session 22 of user core. Sep 10 00:39:10.076040 systemd[1]: Started session-22.scope. Sep 10 00:39:10.195992 sshd[3703]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:10.198112 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:57972.service: Deactivated successfully. Sep 10 00:39:10.199225 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit. Sep 10 00:39:10.199305 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:39:10.200247 systemd-logind[1293]: Removed session 22. Sep 10 00:39:15.201514 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:57974.service. Sep 10 00:39:15.242908 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 57974 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:15.244401 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:15.249010 systemd-logind[1293]: New session 23 of user core. Sep 10 00:39:15.250063 systemd[1]: Started session-23.scope. Sep 10 00:39:15.251359 kubelet[2104]: E0910 00:39:15.251326 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:15.384603 sshd[3718]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:15.387607 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:57974.service: Deactivated successfully. Sep 10 00:39:15.388529 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:39:15.389462 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:39:15.390293 systemd-logind[1293]: Removed session 23. 
Sep 10 00:39:18.249531 kubelet[2104]: E0910 00:39:18.249470 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:19.250070 kubelet[2104]: E0910 00:39:19.250015 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:20.387708 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:34130.service. Sep 10 00:39:20.424512 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 34130 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:20.425875 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:20.430624 systemd-logind[1293]: New session 24 of user core. Sep 10 00:39:20.431722 systemd[1]: Started session-24.scope. Sep 10 00:39:20.584323 sshd[3735]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:20.586722 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:34130.service: Deactivated successfully. Sep 10 00:39:20.587553 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:39:20.588520 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:39:20.589346 systemd-logind[1293]: Removed session 24. Sep 10 00:39:23.250008 kubelet[2104]: E0910 00:39:23.249953 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:25.588110 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:34134.service. 
Sep 10 00:39:25.625248 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 34134 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:25.626560 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:25.630147 systemd-logind[1293]: New session 25 of user core. Sep 10 00:39:25.631140 systemd[1]: Started session-25.scope. Sep 10 00:39:25.733154 sshd[3751]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:25.735652 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:34134.service: Deactivated successfully. Sep 10 00:39:25.736699 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:39:25.736758 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:39:25.737713 systemd-logind[1293]: Removed session 25. Sep 10 00:39:30.737084 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:47200.service. Sep 10 00:39:30.773741 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 47200 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:30.774946 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:30.778642 systemd-logind[1293]: New session 26 of user core. Sep 10 00:39:30.779445 systemd[1]: Started session-26.scope. Sep 10 00:39:30.886201 sshd[3766]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:30.888878 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:47200.service: Deactivated successfully. Sep 10 00:39:30.889856 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 00:39:30.890672 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit. Sep 10 00:39:30.891755 systemd-logind[1293]: Removed session 26. 
Sep 10 00:39:31.249703 kubelet[2104]: E0910 00:39:31.249642 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:34.249792 kubelet[2104]: E0910 00:39:34.249679 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:35.889566 systemd[1]: Started sshd@26-10.0.0.12:22-10.0.0.1:47208.service. Sep 10 00:39:35.925530 sshd[3781]: Accepted publickey for core from 10.0.0.1 port 47208 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:35.926939 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:35.931015 systemd-logind[1293]: New session 27 of user core. Sep 10 00:39:35.931772 systemd[1]: Started session-27.scope. Sep 10 00:39:36.049979 sshd[3781]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:36.053049 systemd[1]: Started sshd@27-10.0.0.12:22-10.0.0.1:47210.service. Sep 10 00:39:36.053749 systemd[1]: sshd@26-10.0.0.12:22-10.0.0.1:47208.service: Deactivated successfully. Sep 10 00:39:36.054824 systemd[1]: session-27.scope: Deactivated successfully. Sep 10 00:39:36.055323 systemd-logind[1293]: Session 27 logged out. Waiting for processes to exit. Sep 10 00:39:36.056153 systemd-logind[1293]: Removed session 27. Sep 10 00:39:36.091959 sshd[3793]: Accepted publickey for core from 10.0.0.1 port 47210 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:36.093116 sshd[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:36.097021 systemd-logind[1293]: New session 28 of user core. Sep 10 00:39:36.098248 systemd[1]: Started session-28.scope. 
Sep 10 00:39:37.473003 kubelet[2104]: I0910 00:39:37.472914 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xn58s" podStartSLOduration=106.472875711 podStartE2EDuration="1m46.472875711s" podCreationTimestamp="2025-09-10 00:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:38:13.952786277 +0000 UTC m=+28.792308897" watchObservedRunningTime="2025-09-10 00:39:37.472875711 +0000 UTC m=+112.312398331" Sep 10 00:39:37.487029 env[1312]: time="2025-09-10T00:39:37.486954695Z" level=info msg="StopContainer for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" with timeout 30 (s)" Sep 10 00:39:37.490615 env[1312]: time="2025-09-10T00:39:37.490573961Z" level=info msg="Stop container \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" with signal terminated" Sep 10 00:39:37.516322 env[1312]: time="2025-09-10T00:39:37.516204637Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:39:37.523791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6-rootfs.mount: Deactivated successfully. 
Sep 10 00:39:37.526338 env[1312]: time="2025-09-10T00:39:37.526293743Z" level=info msg="StopContainer for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" with timeout 2 (s)" Sep 10 00:39:37.526687 env[1312]: time="2025-09-10T00:39:37.526659345Z" level=info msg="Stop container \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" with signal terminated" Sep 10 00:39:37.532287 systemd-networkd[1083]: lxc_health: Link DOWN Sep 10 00:39:37.532297 systemd-networkd[1083]: lxc_health: Lost carrier Sep 10 00:39:37.533940 env[1312]: time="2025-09-10T00:39:37.533871358Z" level=info msg="shim disconnected" id=3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6 Sep 10 00:39:37.534012 env[1312]: time="2025-09-10T00:39:37.533940899Z" level=warning msg="cleaning up after shim disconnected" id=3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6 namespace=k8s.io Sep 10 00:39:37.534012 env[1312]: time="2025-09-10T00:39:37.533950637Z" level=info msg="cleaning up dead shim" Sep 10 00:39:37.541196 env[1312]: time="2025-09-10T00:39:37.541119790Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3850 runtime=io.containerd.runc.v2\n" Sep 10 00:39:37.547583 env[1312]: time="2025-09-10T00:39:37.547539714Z" level=info msg="StopContainer for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" returns successfully" Sep 10 00:39:37.548434 env[1312]: time="2025-09-10T00:39:37.548394892Z" level=info msg="StopPodSandbox for \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\"" Sep 10 00:39:37.548568 env[1312]: time="2025-09-10T00:39:37.548506313Z" level=info msg="Container to stop \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:39:37.551295 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61-shm.mount: Deactivated successfully. Sep 10 00:39:37.579570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140-rootfs.mount: Deactivated successfully. Sep 10 00:39:37.585189 env[1312]: time="2025-09-10T00:39:37.585143531Z" level=info msg="shim disconnected" id=88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140 Sep 10 00:39:37.585189 env[1312]: time="2025-09-10T00:39:37.585189778Z" level=warning msg="cleaning up after shim disconnected" id=88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140 namespace=k8s.io Sep 10 00:39:37.585383 env[1312]: time="2025-09-10T00:39:37.585199507Z" level=info msg="cleaning up dead shim" Sep 10 00:39:37.585834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61-rootfs.mount: Deactivated successfully. 
Sep 10 00:39:37.590053 env[1312]: time="2025-09-10T00:39:37.589988866Z" level=info msg="shim disconnected" id=1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61 Sep 10 00:39:37.590810 env[1312]: time="2025-09-10T00:39:37.590786946Z" level=warning msg="cleaning up after shim disconnected" id=1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61 namespace=k8s.io Sep 10 00:39:37.590946 env[1312]: time="2025-09-10T00:39:37.590923986Z" level=info msg="cleaning up dead shim" Sep 10 00:39:37.593949 env[1312]: time="2025-09-10T00:39:37.593893443Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3898 runtime=io.containerd.runc.v2\n" Sep 10 00:39:37.597737 env[1312]: time="2025-09-10T00:39:37.597678994Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3908 runtime=io.containerd.runc.v2\n" Sep 10 00:39:37.597935 env[1312]: time="2025-09-10T00:39:37.597905663Z" level=info msg="StopContainer for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" returns successfully" Sep 10 00:39:37.597998 env[1312]: time="2025-09-10T00:39:37.597955227Z" level=info msg="TearDown network for sandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" successfully" Sep 10 00:39:37.597998 env[1312]: time="2025-09-10T00:39:37.597974303Z" level=info msg="StopPodSandbox for \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" returns successfully" Sep 10 00:39:37.599675 env[1312]: time="2025-09-10T00:39:37.599649482Z" level=info msg="StopPodSandbox for \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\"" Sep 10 00:39:37.599739 env[1312]: time="2025-09-10T00:39:37.599709345Z" level=info msg="Container to stop \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 
00:39:37.599739 env[1312]: time="2025-09-10T00:39:37.599722740Z" level=info msg="Container to stop \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:39:37.599739 env[1312]: time="2025-09-10T00:39:37.599732268Z" level=info msg="Container to stop \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:39:37.599973 env[1312]: time="2025-09-10T00:39:37.599744291Z" level=info msg="Container to stop \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:39:37.599973 env[1312]: time="2025-09-10T00:39:37.599754911Z" level=info msg="Container to stop \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:39:37.628247 env[1312]: time="2025-09-10T00:39:37.628179092Z" level=info msg="shim disconnected" id=c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd Sep 10 00:39:37.628247 env[1312]: time="2025-09-10T00:39:37.628245949Z" level=warning msg="cleaning up after shim disconnected" id=c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd namespace=k8s.io Sep 10 00:39:37.628247 env[1312]: time="2025-09-10T00:39:37.628258723Z" level=info msg="cleaning up dead shim" Sep 10 00:39:37.637135 env[1312]: time="2025-09-10T00:39:37.637079880Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3942 runtime=io.containerd.runc.v2\n" Sep 10 00:39:37.637515 env[1312]: time="2025-09-10T00:39:37.637477282Z" level=info msg="TearDown network for sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" successfully" Sep 10 00:39:37.637567 env[1312]: time="2025-09-10T00:39:37.637515564Z" 
level=info msg="StopPodSandbox for \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" returns successfully" Sep 10 00:39:37.797066 kubelet[2104]: I0910 00:39:37.796984 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-xtables-lock\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797066 kubelet[2104]: I0910 00:39:37.797036 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-bpf-maps\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797066 kubelet[2104]: I0910 00:39:37.797063 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-lib-modules\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797309 kubelet[2104]: I0910 00:39:37.797086 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-kernel\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797309 kubelet[2104]: I0910 00:39:37.797112 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqwbz\" (UniqueName: \"kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-kube-api-access-gqwbz\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797309 kubelet[2104]: I0910 00:39:37.797129 2104 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ec1db8-411c-4ac4-99cc-e07c70e56eac-cilium-config-path\") pod \"61ec1db8-411c-4ac4-99cc-e07c70e56eac\" (UID: \"61ec1db8-411c-4ac4-99cc-e07c70e56eac\") " Sep 10 00:39:37.797309 kubelet[2104]: I0910 00:39:37.797143 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-cgroup\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797309 kubelet[2104]: I0910 00:39:37.797156 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cni-path\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797309 kubelet[2104]: I0910 00:39:37.797171 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-etc-cni-netd\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797465 kubelet[2104]: I0910 00:39:37.797182 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hostproc\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797465 kubelet[2104]: I0910 00:39:37.797168 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.797465 kubelet[2104]: I0910 00:39:37.797212 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.797465 kubelet[2104]: I0910 00:39:37.797235 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.797465 kubelet[2104]: I0910 00:39:37.797254 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.797620 kubelet[2104]: I0910 00:39:37.797193 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-run\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797620 kubelet[2104]: I0910 00:39:37.797318 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hubble-tls\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797620 kubelet[2104]: I0910 00:39:37.797347 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab6e75b4-2401-4c17-bb89-7a450c5017a6-clustermesh-secrets\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797620 kubelet[2104]: I0910 00:39:37.797369 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-config-path\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797620 kubelet[2104]: I0910 00:39:37.797385 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-net\") pod \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\" (UID: \"ab6e75b4-2401-4c17-bb89-7a450c5017a6\") " Sep 10 00:39:37.797620 kubelet[2104]: I0910 00:39:37.797405 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mq55\" (UniqueName: 
\"kubernetes.io/projected/61ec1db8-411c-4ac4-99cc-e07c70e56eac-kube-api-access-9mq55\") pod \"61ec1db8-411c-4ac4-99cc-e07c70e56eac\" (UID: \"61ec1db8-411c-4ac4-99cc-e07c70e56eac\") " Sep 10 00:39:37.797754 kubelet[2104]: I0910 00:39:37.797437 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.797754 kubelet[2104]: I0910 00:39:37.797449 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.797754 kubelet[2104]: I0910 00:39:37.797458 2104 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.797754 kubelet[2104]: I0910 00:39:37.797468 2104 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.798850 kubelet[2104]: I0910 00:39:37.797267 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.798942 kubelet[2104]: I0910 00:39:37.797276 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.799018 kubelet[2104]: I0910 00:39:37.797286 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.799104 kubelet[2104]: I0910 00:39:37.797168 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.799177 kubelet[2104]: I0910 00:39:37.797739 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.799326 kubelet[2104]: I0910 00:39:37.799295 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ec1db8-411c-4ac4-99cc-e07c70e56eac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61ec1db8-411c-4ac4-99cc-e07c70e56eac" (UID: "61ec1db8-411c-4ac4-99cc-e07c70e56eac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:39:37.799381 kubelet[2104]: I0910 00:39:37.799346 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:37.800779 kubelet[2104]: I0910 00:39:37.800753 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:39:37.800779 kubelet[2104]: I0910 00:39:37.800764 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6e75b4-2401-4c17-bb89-7a450c5017a6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:39:37.800871 kubelet[2104]: I0910 00:39:37.800765 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-kube-api-access-gqwbz" (OuterVolumeSpecName: "kube-api-access-gqwbz") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "kube-api-access-gqwbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:39:37.801367 kubelet[2104]: I0910 00:39:37.801337 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ec1db8-411c-4ac4-99cc-e07c70e56eac-kube-api-access-9mq55" (OuterVolumeSpecName: "kube-api-access-9mq55") pod "61ec1db8-411c-4ac4-99cc-e07c70e56eac" (UID: "61ec1db8-411c-4ac4-99cc-e07c70e56eac"). InnerVolumeSpecName "kube-api-access-9mq55". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:39:37.802226 kubelet[2104]: I0910 00:39:37.802195 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab6e75b4-2401-4c17-bb89-7a450c5017a6" (UID: "ab6e75b4-2401-4c17-bb89-7a450c5017a6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897586 2104 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqwbz\" (UniqueName: \"kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-kube-api-access-gqwbz\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897626 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ec1db8-411c-4ac4-99cc-e07c70e56eac-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897635 2104 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897644 2104 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897654 2104 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab6e75b4-2401-4c17-bb89-7a450c5017a6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897661 2104 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 00:39:37.897667 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab6e75b4-2401-4c17-bb89-7a450c5017a6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.897659 kubelet[2104]: I0910 
00:39:37.897674 2104 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.898162 kubelet[2104]: I0910 00:39:37.897681 2104 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab6e75b4-2401-4c17-bb89-7a450c5017a6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.898162 kubelet[2104]: I0910 00:39:37.897687 2104 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mq55\" (UniqueName: \"kubernetes.io/projected/61ec1db8-411c-4ac4-99cc-e07c70e56eac-kube-api-access-9mq55\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.898162 kubelet[2104]: I0910 00:39:37.897694 2104 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:37.898162 kubelet[2104]: I0910 00:39:37.897701 2104 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab6e75b4-2401-4c17-bb89-7a450c5017a6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:38.490566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd-rootfs.mount: Deactivated successfully. Sep 10 00:39:38.490759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd-shm.mount: Deactivated successfully. Sep 10 00:39:38.490864 systemd[1]: var-lib-kubelet-pods-61ec1db8\x2d411c\x2d4ac4\x2d99cc\x2de07c70e56eac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9mq55.mount: Deactivated successfully. 
Sep 10 00:39:38.490980 systemd[1]: var-lib-kubelet-pods-ab6e75b4\x2d2401\x2d4c17\x2dbb89\x2d7a450c5017a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgqwbz.mount: Deactivated successfully. Sep 10 00:39:38.491111 systemd[1]: var-lib-kubelet-pods-ab6e75b4\x2d2401\x2d4c17\x2dbb89\x2d7a450c5017a6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:39:38.491233 systemd[1]: var-lib-kubelet-pods-ab6e75b4\x2d2401\x2d4c17\x2dbb89\x2d7a450c5017a6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 10 00:39:38.526428 kubelet[2104]: I0910 00:39:38.526388 2104 scope.go:117] "RemoveContainer" containerID="3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6" Sep 10 00:39:38.527998 env[1312]: time="2025-09-10T00:39:38.527958871Z" level=info msg="RemoveContainer for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\"" Sep 10 00:39:38.641510 env[1312]: time="2025-09-10T00:39:38.641419676Z" level=info msg="RemoveContainer for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" returns successfully" Sep 10 00:39:38.642460 kubelet[2104]: I0910 00:39:38.642412 2104 scope.go:117] "RemoveContainer" containerID="3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6" Sep 10 00:39:38.642717 env[1312]: time="2025-09-10T00:39:38.642645285Z" level=error msg="ContainerStatus for \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\": not found" Sep 10 00:39:38.642867 kubelet[2104]: E0910 00:39:38.642839 2104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\": not found" 
containerID="3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6" Sep 10 00:39:38.642991 kubelet[2104]: I0910 00:39:38.642888 2104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6"} err="failed to get container status \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3dfb3eecd67d3838e60727ddd75d6a9e22aac0f3376d854d9d6a9f147a5962e6\": not found" Sep 10 00:39:38.642991 kubelet[2104]: I0910 00:39:38.642983 2104 scope.go:117] "RemoveContainer" containerID="88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140" Sep 10 00:39:38.644833 env[1312]: time="2025-09-10T00:39:38.644778790Z" level=info msg="RemoveContainer for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\"" Sep 10 00:39:38.726116 env[1312]: time="2025-09-10T00:39:38.726038839Z" level=info msg="RemoveContainer for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" returns successfully" Sep 10 00:39:38.726750 kubelet[2104]: I0910 00:39:38.726715 2104 scope.go:117] "RemoveContainer" containerID="6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d" Sep 10 00:39:38.728084 env[1312]: time="2025-09-10T00:39:38.728045384Z" level=info msg="RemoveContainer for \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\"" Sep 10 00:39:38.735145 env[1312]: time="2025-09-10T00:39:38.735077826Z" level=info msg="RemoveContainer for \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\" returns successfully" Sep 10 00:39:38.735483 kubelet[2104]: I0910 00:39:38.735440 2104 scope.go:117] "RemoveContainer" containerID="bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd" Sep 10 00:39:38.737114 env[1312]: time="2025-09-10T00:39:38.737082417Z" level=info msg="RemoveContainer for 
\"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\"" Sep 10 00:39:38.747936 env[1312]: time="2025-09-10T00:39:38.747016347Z" level=info msg="RemoveContainer for \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\" returns successfully" Sep 10 00:39:38.749657 kubelet[2104]: I0910 00:39:38.749619 2104 scope.go:117] "RemoveContainer" containerID="8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4" Sep 10 00:39:38.751786 env[1312]: time="2025-09-10T00:39:38.751739380Z" level=info msg="RemoveContainer for \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\"" Sep 10 00:39:38.821353 env[1312]: time="2025-09-10T00:39:38.821247261Z" level=info msg="RemoveContainer for \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\" returns successfully" Sep 10 00:39:38.821740 kubelet[2104]: I0910 00:39:38.821688 2104 scope.go:117] "RemoveContainer" containerID="6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa" Sep 10 00:39:38.823418 env[1312]: time="2025-09-10T00:39:38.823378371Z" level=info msg="RemoveContainer for \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\"" Sep 10 00:39:38.954550 env[1312]: time="2025-09-10T00:39:38.954435219Z" level=info msg="RemoveContainer for \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\" returns successfully" Sep 10 00:39:38.954916 kubelet[2104]: I0910 00:39:38.954848 2104 scope.go:117] "RemoveContainer" containerID="88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140" Sep 10 00:39:38.955401 env[1312]: time="2025-09-10T00:39:38.955271451Z" level=error msg="ContainerStatus for \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\": not found" Sep 10 00:39:38.955717 kubelet[2104]: E0910 00:39:38.955655 2104 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\": not found" containerID="88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140" Sep 10 00:39:38.955839 kubelet[2104]: I0910 00:39:38.955738 2104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140"} err="failed to get container status \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\": rpc error: code = NotFound desc = an error occurred when try to find container \"88cb827dabd28a341302fb6d9a9eeaa17b7bf69a0e21784f5dc9eb3d356d9140\": not found" Sep 10 00:39:38.955839 kubelet[2104]: I0910 00:39:38.955785 2104 scope.go:117] "RemoveContainer" containerID="6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d" Sep 10 00:39:38.956145 env[1312]: time="2025-09-10T00:39:38.956090149Z" level=error msg="ContainerStatus for \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\": not found" Sep 10 00:39:38.956314 kubelet[2104]: E0910 00:39:38.956290 2104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\": not found" containerID="6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d" Sep 10 00:39:38.956358 kubelet[2104]: I0910 00:39:38.956316 2104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d"} err="failed to get container status 
\"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d88b68abe61689bd8aea4fd25d637f74a13e18c1f99bdcd86ad74a843f1741d\": not found" Sep 10 00:39:38.956358 kubelet[2104]: I0910 00:39:38.956342 2104 scope.go:117] "RemoveContainer" containerID="bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd" Sep 10 00:39:38.956626 env[1312]: time="2025-09-10T00:39:38.956548255Z" level=error msg="ContainerStatus for \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\": not found" Sep 10 00:39:38.956758 kubelet[2104]: E0910 00:39:38.956730 2104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\": not found" containerID="bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd" Sep 10 00:39:38.956842 kubelet[2104]: I0910 00:39:38.956757 2104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd"} err="failed to get container status \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc36983c4d9dd3e3cadab7aadb4510e2a115139e0db66c46571bc35261b9a1bd\": not found" Sep 10 00:39:38.956842 kubelet[2104]: I0910 00:39:38.956776 2104 scope.go:117] "RemoveContainer" containerID="8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4" Sep 10 00:39:38.956986 env[1312]: time="2025-09-10T00:39:38.956939236Z" level=error msg="ContainerStatus for \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\": not found" Sep 10 00:39:38.957103 kubelet[2104]: E0910 00:39:38.957081 2104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\": not found" containerID="8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4" Sep 10 00:39:38.957147 kubelet[2104]: I0910 00:39:38.957107 2104 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4"} err="failed to get container status \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c032f0458585e4e03b811874fcfb3c93a848073db9464bb6bafe7ce40dfbbd4\": not found" Sep 10 00:39:38.957147 kubelet[2104]: I0910 00:39:38.957124 2104 scope.go:117] "RemoveContainer" containerID="6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa" Sep 10 00:39:38.957339 env[1312]: time="2025-09-10T00:39:38.957267216Z" level=error msg="ContainerStatus for \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\": not found" Sep 10 00:39:38.957687 kubelet[2104]: E0910 00:39:38.957653 2104 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\": not found" containerID="6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa" Sep 10 00:39:38.957761 kubelet[2104]: I0910 00:39:38.957693 2104 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa"} err="failed to get container status \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d51ef98bd5d8fa2cbd50dff4d26a1d3856e0b0445416f55d534af3d8d3bebaa\": not found" Sep 10 00:39:39.251682 kubelet[2104]: I0910 00:39:39.251627 2104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ec1db8-411c-4ac4-99cc-e07c70e56eac" path="/var/lib/kubelet/pods/61ec1db8-411c-4ac4-99cc-e07c70e56eac/volumes" Sep 10 00:39:39.252214 kubelet[2104]: I0910 00:39:39.252185 2104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" path="/var/lib/kubelet/pods/ab6e75b4-2401-4c17-bb89-7a450c5017a6/volumes" Sep 10 00:39:39.436480 sshd[3793]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:39.439343 systemd[1]: Started sshd@28-10.0.0.12:22-10.0.0.1:47224.service. Sep 10 00:39:39.439986 systemd[1]: sshd@27-10.0.0.12:22-10.0.0.1:47210.service: Deactivated successfully. Sep 10 00:39:39.441521 systemd[1]: session-28.scope: Deactivated successfully. Sep 10 00:39:39.442151 systemd-logind[1293]: Session 28 logged out. Waiting for processes to exit. Sep 10 00:39:39.443382 systemd-logind[1293]: Removed session 28. Sep 10 00:39:39.481740 sshd[3959]: Accepted publickey for core from 10.0.0.1 port 47224 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:39.483238 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:39.487957 systemd-logind[1293]: New session 29 of user core. Sep 10 00:39:39.489116 systemd[1]: Started session-29.scope. 
Sep 10 00:39:40.316059 kubelet[2104]: E0910 00:39:40.315996 2104 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:39:40.375488 systemd[1]: Started sshd@29-10.0.0.12:22-10.0.0.1:46980.service. Sep 10 00:39:40.396483 kubelet[2104]: E0910 00:39:40.396411 2104 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" containerName="mount-cgroup" Sep 10 00:39:40.396483 kubelet[2104]: E0910 00:39:40.396455 2104 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" containerName="clean-cilium-state" Sep 10 00:39:40.396483 kubelet[2104]: E0910 00:39:40.396466 2104 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61ec1db8-411c-4ac4-99cc-e07c70e56eac" containerName="cilium-operator" Sep 10 00:39:40.396483 kubelet[2104]: E0910 00:39:40.396476 2104 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" containerName="apply-sysctl-overwrites" Sep 10 00:39:40.396483 kubelet[2104]: E0910 00:39:40.396487 2104 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" containerName="mount-bpf-fs" Sep 10 00:39:40.396483 kubelet[2104]: E0910 00:39:40.396513 2104 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" containerName="cilium-agent" Sep 10 00:39:40.397070 kubelet[2104]: I0910 00:39:40.396555 2104 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ec1db8-411c-4ac4-99cc-e07c70e56eac" containerName="cilium-operator" Sep 10 00:39:40.397070 kubelet[2104]: I0910 00:39:40.396588 2104 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6e75b4-2401-4c17-bb89-7a450c5017a6" containerName="cilium-agent" Sep 10 00:39:40.415487 sshd[3975]: Accepted 
publickey for core from 10.0.0.1 port 46980 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:40.416760 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:40.420831 systemd-logind[1293]: New session 30 of user core. Sep 10 00:39:40.421780 systemd[1]: Started session-30.scope. Sep 10 00:39:40.513322 kubelet[2104]: I0910 00:39:40.513251 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-lib-modules\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513322 kubelet[2104]: I0910 00:39:40.513305 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-kernel\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513322 kubelet[2104]: I0910 00:39:40.513332 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-hubble-tls\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513620 kubelet[2104]: I0910 00:39:40.513371 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-hostproc\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513620 kubelet[2104]: I0910 00:39:40.513400 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-ipsec-secrets\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513620 kubelet[2104]: I0910 00:39:40.513417 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-bpf-maps\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513620 kubelet[2104]: I0910 00:39:40.513432 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-cgroup\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513620 kubelet[2104]: I0910 00:39:40.513449 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-net\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513620 kubelet[2104]: I0910 00:39:40.513466 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhww\" (UniqueName: \"kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-kube-api-access-qqhww\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513803 kubelet[2104]: I0910 00:39:40.513486 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-run\") pod 
\"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513803 kubelet[2104]: I0910 00:39:40.513516 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-etc-cni-netd\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513803 kubelet[2104]: I0910 00:39:40.513533 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-xtables-lock\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513803 kubelet[2104]: I0910 00:39:40.513559 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-clustermesh-secrets\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513803 kubelet[2104]: I0910 00:39:40.513577 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-config-path\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.513803 kubelet[2104]: I0910 00:39:40.513616 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cni-path\") pod \"cilium-2b989\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " pod="kube-system/cilium-2b989" Sep 10 00:39:40.586542 
sshd[3959]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:40.591160 systemd[1]: Started sshd@30-10.0.0.12:22-10.0.0.1:46994.service. Sep 10 00:39:40.591761 systemd[1]: sshd@28-10.0.0.12:22-10.0.0.1:47224.service: Deactivated successfully. Sep 10 00:39:40.593942 systemd[1]: session-29.scope: Deactivated successfully. Sep 10 00:39:40.594760 systemd-logind[1293]: Session 29 logged out. Waiting for processes to exit. Sep 10 00:39:40.596238 systemd-logind[1293]: Removed session 29. Sep 10 00:39:40.631552 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 46994 ssh2: RSA SHA256:naKAIq1jxVWXXwxrf2qyS4Axg4ReIMCTV7B/o3rV64U Sep 10 00:39:40.632925 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:40.636197 systemd-logind[1293]: New session 31 of user core. Sep 10 00:39:40.637311 systemd[1]: Started session-31.scope. Sep 10 00:39:40.687314 kubelet[2104]: E0910 00:39:40.686309 2104 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-qqhww], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-2b989" podUID="289eb48a-8ea6-45fd-9389-86b5cf784388" Sep 10 00:39:40.802672 sshd[3975]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:40.805418 systemd[1]: sshd@29-10.0.0.12:22-10.0.0.1:46980.service: Deactivated successfully. Sep 10 00:39:40.806658 systemd-logind[1293]: Session 30 logged out. Waiting for processes to exit. Sep 10 00:39:40.806687 systemd[1]: session-30.scope: Deactivated successfully. Sep 10 00:39:40.807676 systemd-logind[1293]: Removed session 30. 
Sep 10 00:39:41.721670 kubelet[2104]: I0910 00:39:41.721596 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-lib-modules\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.721670 kubelet[2104]: I0910 00:39:41.721652 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cni-path\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.721670 kubelet[2104]: I0910 00:39:41.721678 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-hubble-tls\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722241 kubelet[2104]: I0910 00:39:41.721693 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-cgroup\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722241 kubelet[2104]: I0910 00:39:41.721711 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-clustermesh-secrets\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722241 kubelet[2104]: I0910 00:39:41.721728 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-kernel\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722241 kubelet[2104]: I0910 00:39:41.721709 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.722241 kubelet[2104]: I0910 00:39:41.721757 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-run\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722241 kubelet[2104]: I0910 00:39:41.721772 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-etc-cni-netd\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722465 kubelet[2104]: I0910 00:39:41.721793 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhww\" (UniqueName: \"kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-kube-api-access-qqhww\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722465 kubelet[2104]: I0910 00:39:41.721788 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cni-path" (OuterVolumeSpecName: "cni-path") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.722465 kubelet[2104]: I0910 00:39:41.721807 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-ipsec-secrets\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722465 kubelet[2104]: I0910 00:39:41.721869 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-net\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722465 kubelet[2104]: I0910 00:39:41.721892 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-xtables-lock\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722465 kubelet[2104]: I0910 00:39:41.721919 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-config-path\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722693 kubelet[2104]: I0910 00:39:41.721942 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-hostproc\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722693 kubelet[2104]: I0910 00:39:41.721959 2104 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-bpf-maps\") pod \"289eb48a-8ea6-45fd-9389-86b5cf784388\" (UID: \"289eb48a-8ea6-45fd-9389-86b5cf784388\") " Sep 10 00:39:41.722693 kubelet[2104]: I0910 00:39:41.722014 2104 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.722693 kubelet[2104]: I0910 00:39:41.722029 2104 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.722693 kubelet[2104]: I0910 00:39:41.722051 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.722693 kubelet[2104]: I0910 00:39:41.722073 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.722881 kubelet[2104]: I0910 00:39:41.722088 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.724239 kubelet[2104]: I0910 00:39:41.724213 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:39:41.724354 kubelet[2104]: I0910 00:39:41.724334 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-hostproc" (OuterVolumeSpecName: "hostproc") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.724535 kubelet[2104]: I0910 00:39:41.724516 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.724653 kubelet[2104]: I0910 00:39:41.724634 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.724764 kubelet[2104]: I0910 00:39:41.724741 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.724898 kubelet[2104]: I0910 00:39:41.724862 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 10 00:39:41.725025 kubelet[2104]: I0910 00:39:41.725001 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:39:41.726414 systemd[1]: var-lib-kubelet-pods-289eb48a\x2d8ea6\x2d45fd\x2d9389\x2d86b5cf784388-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:39:41.726617 systemd[1]: var-lib-kubelet-pods-289eb48a\x2d8ea6\x2d45fd\x2d9389\x2d86b5cf784388-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Sep 10 00:39:41.727125 kubelet[2104]: I0910 00:39:41.726925 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-kube-api-access-qqhww" (OuterVolumeSpecName: "kube-api-access-qqhww") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "kube-api-access-qqhww". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:39:41.727287 kubelet[2104]: I0910 00:39:41.727261 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:39:41.727459 kubelet[2104]: I0910 00:39:41.727437 2104 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "289eb48a-8ea6-45fd-9389-86b5cf784388" (UID: "289eb48a-8ea6-45fd-9389-86b5cf784388"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:39:41.729447 systemd[1]: var-lib-kubelet-pods-289eb48a\x2d8ea6\x2d45fd\x2d9389\x2d86b5cf784388-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqqhww.mount: Deactivated successfully. Sep 10 00:39:41.729615 systemd[1]: var-lib-kubelet-pods-289eb48a\x2d8ea6\x2d45fd\x2d9389\x2d86b5cf784388-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 10 00:39:41.823297 kubelet[2104]: I0910 00:39:41.823243 2104 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823297 kubelet[2104]: I0910 00:39:41.823283 2104 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhww\" (UniqueName: \"kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-kube-api-access-qqhww\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823297 kubelet[2104]: I0910 00:39:41.823303 2104 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823313 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823323 2104 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823332 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823341 2104 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823350 2104 
reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823360 2104 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/289eb48a-8ea6-45fd-9389-86b5cf784388-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823369 2104 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823516 kubelet[2104]: I0910 00:39:41.823380 2104 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/289eb48a-8ea6-45fd-9389-86b5cf784388-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823716 kubelet[2104]: I0910 00:39:41.823390 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:41.823716 kubelet[2104]: I0910 00:39:41.823399 2104 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/289eb48a-8ea6-45fd-9389-86b5cf784388-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:39:42.828918 kubelet[2104]: I0910 00:39:42.828847 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5n5n\" (UniqueName: \"kubernetes.io/projected/0b4b7852-241d-431c-a752-22776e42f117-kube-api-access-x5n5n\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.828918 kubelet[2104]: I0910 00:39:42.828928 
2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-lib-modules\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829359 kubelet[2104]: I0910 00:39:42.828963 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-hostproc\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829359 kubelet[2104]: I0910 00:39:42.829002 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-cilium-cgroup\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829359 kubelet[2104]: I0910 00:39:42.829032 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b4b7852-241d-431c-a752-22776e42f117-hubble-tls\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829359 kubelet[2104]: I0910 00:39:42.829086 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-cni-path\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829359 kubelet[2104]: I0910 00:39:42.829124 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-host-proc-sys-kernel\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829359 kubelet[2104]: I0910 00:39:42.829159 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-host-proc-sys-net\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829628 kubelet[2104]: I0910 00:39:42.829192 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-cilium-run\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829628 kubelet[2104]: I0910 00:39:42.829226 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-etc-cni-netd\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829628 kubelet[2104]: I0910 00:39:42.829246 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b4b7852-241d-431c-a752-22776e42f117-clustermesh-secrets\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829628 kubelet[2104]: I0910 00:39:42.829260 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b4b7852-241d-431c-a752-22776e42f117-cilium-config-path\") pod \"cilium-zlzzs\" (UID: 
\"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829628 kubelet[2104]: I0910 00:39:42.829275 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0b4b7852-241d-431c-a752-22776e42f117-cilium-ipsec-secrets\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829628 kubelet[2104]: I0910 00:39:42.829292 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-bpf-maps\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.829815 kubelet[2104]: I0910 00:39:42.829310 2104 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b4b7852-241d-431c-a752-22776e42f117-xtables-lock\") pod \"cilium-zlzzs\" (UID: \"0b4b7852-241d-431c-a752-22776e42f117\") " pod="kube-system/cilium-zlzzs" Sep 10 00:39:42.972205 kubelet[2104]: E0910 00:39:42.972152 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:42.974012 env[1312]: time="2025-09-10T00:39:42.973961479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlzzs,Uid:0b4b7852-241d-431c-a752-22776e42f117,Namespace:kube-system,Attempt:0,}" Sep 10 00:39:43.051632 env[1312]: time="2025-09-10T00:39:43.051558879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:39:43.051632 env[1312]: time="2025-09-10T00:39:43.051606650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:39:43.051632 env[1312]: time="2025-09-10T00:39:43.051617901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:39:43.051850 env[1312]: time="2025-09-10T00:39:43.051771341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086 pid=4022 runtime=io.containerd.runc.v2 Sep 10 00:39:43.088699 env[1312]: time="2025-09-10T00:39:43.087993367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlzzs,Uid:0b4b7852-241d-431c-a752-22776e42f117,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\"" Sep 10 00:39:43.089050 kubelet[2104]: E0910 00:39:43.089022 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:43.091158 env[1312]: time="2025-09-10T00:39:43.091099839Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:39:43.103543 env[1312]: time="2025-09-10T00:39:43.103478036Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18cf54fd8df5726775cfa774d7495f408323b8059c6875258b4b04c5ecbaf894\"" Sep 10 00:39:43.104095 env[1312]: time="2025-09-10T00:39:43.104013488Z" level=info msg="StartContainer for \"18cf54fd8df5726775cfa774d7495f408323b8059c6875258b4b04c5ecbaf894\"" Sep 10 00:39:43.145704 env[1312]: time="2025-09-10T00:39:43.145639629Z" level=info msg="StartContainer for 
\"18cf54fd8df5726775cfa774d7495f408323b8059c6875258b4b04c5ecbaf894\" returns successfully" Sep 10 00:39:43.202185 env[1312]: time="2025-09-10T00:39:43.202125377Z" level=info msg="shim disconnected" id=18cf54fd8df5726775cfa774d7495f408323b8059c6875258b4b04c5ecbaf894 Sep 10 00:39:43.202579 env[1312]: time="2025-09-10T00:39:43.202560851Z" level=warning msg="cleaning up after shim disconnected" id=18cf54fd8df5726775cfa774d7495f408323b8059c6875258b4b04c5ecbaf894 namespace=k8s.io Sep 10 00:39:43.202686 env[1312]: time="2025-09-10T00:39:43.202639159Z" level=info msg="cleaning up dead shim" Sep 10 00:39:43.210889 env[1312]: time="2025-09-10T00:39:43.210853685Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" Sep 10 00:39:43.384095 kubelet[2104]: I0910 00:39:43.383990 2104 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289eb48a-8ea6-45fd-9389-86b5cf784388" path="/var/lib/kubelet/pods/289eb48a-8ea6-45fd-9389-86b5cf784388/volumes" Sep 10 00:39:43.544316 kubelet[2104]: E0910 00:39:43.544278 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:43.545908 env[1312]: time="2025-09-10T00:39:43.545863154Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:39:43.978988 env[1312]: time="2025-09-10T00:39:43.978882429Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1e97ee7f2c84aefbf3269ce68c3a55062473fa91b72bc6d0afde64a21647f400\"" Sep 10 00:39:43.979887 env[1312]: time="2025-09-10T00:39:43.979834448Z" level=info 
msg="StartContainer for \"1e97ee7f2c84aefbf3269ce68c3a55062473fa91b72bc6d0afde64a21647f400\"" Sep 10 00:39:44.104606 env[1312]: time="2025-09-10T00:39:44.104537928Z" level=info msg="StartContainer for \"1e97ee7f2c84aefbf3269ce68c3a55062473fa91b72bc6d0afde64a21647f400\" returns successfully" Sep 10 00:39:44.120720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e97ee7f2c84aefbf3269ce68c3a55062473fa91b72bc6d0afde64a21647f400-rootfs.mount: Deactivated successfully. Sep 10 00:39:44.138950 env[1312]: time="2025-09-10T00:39:44.138863672Z" level=info msg="shim disconnected" id=1e97ee7f2c84aefbf3269ce68c3a55062473fa91b72bc6d0afde64a21647f400 Sep 10 00:39:44.138950 env[1312]: time="2025-09-10T00:39:44.138920980Z" level=warning msg="cleaning up after shim disconnected" id=1e97ee7f2c84aefbf3269ce68c3a55062473fa91b72bc6d0afde64a21647f400 namespace=k8s.io Sep 10 00:39:44.138950 env[1312]: time="2025-09-10T00:39:44.138930418Z" level=info msg="cleaning up dead shim" Sep 10 00:39:44.145752 env[1312]: time="2025-09-10T00:39:44.145702296Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4168 runtime=io.containerd.runc.v2\n" Sep 10 00:39:44.551476 kubelet[2104]: E0910 00:39:44.551415 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:44.553828 env[1312]: time="2025-09-10T00:39:44.553771578Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:39:44.670628 env[1312]: time="2025-09-10T00:39:44.670536728Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"06efecf9312aae572b972020530bb28c1390d8ef85b6cf7ec2db0ab90f3b6ed5\"" Sep 10 00:39:44.671535 env[1312]: time="2025-09-10T00:39:44.671357349Z" level=info msg="StartContainer for \"06efecf9312aae572b972020530bb28c1390d8ef85b6cf7ec2db0ab90f3b6ed5\"" Sep 10 00:39:44.783822 env[1312]: time="2025-09-10T00:39:44.783755817Z" level=info msg="StartContainer for \"06efecf9312aae572b972020530bb28c1390d8ef85b6cf7ec2db0ab90f3b6ed5\" returns successfully" Sep 10 00:39:44.818775 env[1312]: time="2025-09-10T00:39:44.818382169Z" level=info msg="shim disconnected" id=06efecf9312aae572b972020530bb28c1390d8ef85b6cf7ec2db0ab90f3b6ed5 Sep 10 00:39:44.818775 env[1312]: time="2025-09-10T00:39:44.818440239Z" level=warning msg="cleaning up after shim disconnected" id=06efecf9312aae572b972020530bb28c1390d8ef85b6cf7ec2db0ab90f3b6ed5 namespace=k8s.io Sep 10 00:39:44.818775 env[1312]: time="2025-09-10T00:39:44.818451760Z" level=info msg="cleaning up dead shim" Sep 10 00:39:44.829154 env[1312]: time="2025-09-10T00:39:44.829089345Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4223 runtime=io.containerd.runc.v2\n" Sep 10 00:39:45.229487 env[1312]: time="2025-09-10T00:39:45.229340650Z" level=info msg="StopPodSandbox for \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\"" Sep 10 00:39:45.229878 env[1312]: time="2025-09-10T00:39:45.229466317Z" level=info msg="TearDown network for sandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" successfully" Sep 10 00:39:45.229878 env[1312]: time="2025-09-10T00:39:45.229522093Z" level=info msg="StopPodSandbox for \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" returns successfully" Sep 10 00:39:45.230027 env[1312]: time="2025-09-10T00:39:45.229927228Z" level=info msg="RemovePodSandbox for \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\"" Sep 10 00:39:45.230027 env[1312]: 
time="2025-09-10T00:39:45.229966663Z" level=info msg="Forcibly stopping sandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\"" Sep 10 00:39:45.230089 env[1312]: time="2025-09-10T00:39:45.230031837Z" level=info msg="TearDown network for sandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" successfully" Sep 10 00:39:45.233643 env[1312]: time="2025-09-10T00:39:45.233602794Z" level=info msg="RemovePodSandbox \"1bb5cc10ed07d256cc972eeb8befbd6867ac9390dc13973b01a50a12df790b61\" returns successfully" Sep 10 00:39:45.234176 env[1312]: time="2025-09-10T00:39:45.234142384Z" level=info msg="StopPodSandbox for \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\"" Sep 10 00:39:45.234286 env[1312]: time="2025-09-10T00:39:45.234240980Z" level=info msg="TearDown network for sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" successfully" Sep 10 00:39:45.234325 env[1312]: time="2025-09-10T00:39:45.234283180Z" level=info msg="StopPodSandbox for \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" returns successfully" Sep 10 00:39:45.234669 env[1312]: time="2025-09-10T00:39:45.234629435Z" level=info msg="RemovePodSandbox for \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\"" Sep 10 00:39:45.234825 env[1312]: time="2025-09-10T00:39:45.234759541Z" level=info msg="Forcibly stopping sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\"" Sep 10 00:39:45.235048 env[1312]: time="2025-09-10T00:39:45.234873856Z" level=info msg="TearDown network for sandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" successfully" Sep 10 00:39:45.238620 env[1312]: time="2025-09-10T00:39:45.238557727Z" level=info msg="RemovePodSandbox \"c5833686b1433264ea602d3ff2336c558bc671ce3651325adf9c8d8d91f18dfd\" returns successfully" Sep 10 00:39:45.317460 kubelet[2104]: E0910 00:39:45.317397 2104 kubelet.go:2902] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:39:45.557192 kubelet[2104]: E0910 00:39:45.557140 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:45.558903 env[1312]: time="2025-09-10T00:39:45.558860925Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:39:45.767476 env[1312]: time="2025-09-10T00:39:45.767388455Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e80e888fb64576ac5eb8ec87c2f22af432880b7c93b67828634c7fe1961bfda\"" Sep 10 00:39:45.768199 env[1312]: time="2025-09-10T00:39:45.768154943Z" level=info msg="StartContainer for \"0e80e888fb64576ac5eb8ec87c2f22af432880b7c93b67828634c7fe1961bfda\"" Sep 10 00:39:45.826273 env[1312]: time="2025-09-10T00:39:45.826114181Z" level=info msg="StartContainer for \"0e80e888fb64576ac5eb8ec87c2f22af432880b7c93b67828634c7fe1961bfda\" returns successfully" Sep 10 00:39:45.847335 env[1312]: time="2025-09-10T00:39:45.847260482Z" level=info msg="shim disconnected" id=0e80e888fb64576ac5eb8ec87c2f22af432880b7c93b67828634c7fe1961bfda Sep 10 00:39:45.847335 env[1312]: time="2025-09-10T00:39:45.847328160Z" level=warning msg="cleaning up after shim disconnected" id=0e80e888fb64576ac5eb8ec87c2f22af432880b7c93b67828634c7fe1961bfda namespace=k8s.io Sep 10 00:39:45.847335 env[1312]: time="2025-09-10T00:39:45.847341525Z" level=info msg="cleaning up dead shim" Sep 10 00:39:45.855653 env[1312]: time="2025-09-10T00:39:45.855597697Z" level=warning msg="cleanup warnings time=\"2025-09-10T00:39:45Z\" level=info msg=\"starting 
signal loop\" namespace=k8s.io pid=4283 runtime=io.containerd.runc.v2\n" Sep 10 00:39:45.938236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e80e888fb64576ac5eb8ec87c2f22af432880b7c93b67828634c7fe1961bfda-rootfs.mount: Deactivated successfully. Sep 10 00:39:46.561645 kubelet[2104]: E0910 00:39:46.561608 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:46.563299 env[1312]: time="2025-09-10T00:39:46.563259614Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:39:47.027546 env[1312]: time="2025-09-10T00:39:47.023343197Z" level=info msg="CreateContainer within sandbox \"87d3f2fc6030ec2e59974bababe2de2d487343b169b672636c86b78547da7086\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"596ca456e6da1667c661efe7eb1a56ede7de9dff41ff2026b92554b6a697de25\"" Sep 10 00:39:47.027546 env[1312]: time="2025-09-10T00:39:47.024247545Z" level=info msg="StartContainer for \"596ca456e6da1667c661efe7eb1a56ede7de9dff41ff2026b92554b6a697de25\"" Sep 10 00:39:47.091124 env[1312]: time="2025-09-10T00:39:47.091012985Z" level=info msg="StartContainer for \"596ca456e6da1667c661efe7eb1a56ede7de9dff41ff2026b92554b6a697de25\" returns successfully" Sep 10 00:39:47.566990 kubelet[2104]: E0910 00:39:47.565747 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:47.579557 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 10 00:39:48.008302 systemd[1]: run-containerd-runc-k8s.io-596ca456e6da1667c661efe7eb1a56ede7de9dff41ff2026b92554b6a697de25-runc.q7BA1f.mount: Deactivated successfully. 
Sep 10 00:39:48.202807 kubelet[2104]: I0910 00:39:48.202749 2104 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-10T00:39:48Z","lastTransitionTime":"2025-09-10T00:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 10 00:39:48.973356 kubelet[2104]: E0910 00:39:48.973237 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:50.565731 systemd-networkd[1083]: lxc_health: Link UP Sep 10 00:39:50.574780 systemd-networkd[1083]: lxc_health: Gained carrier Sep 10 00:39:50.577728 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 10 00:39:50.975014 kubelet[2104]: E0910 00:39:50.974838 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:50.996534 kubelet[2104]: I0910 00:39:50.996440 2104 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zlzzs" podStartSLOduration=8.996414284 podStartE2EDuration="8.996414284s" podCreationTimestamp="2025-09-10 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:39:47.698178015 +0000 UTC m=+122.537700635" watchObservedRunningTime="2025-09-10 00:39:50.996414284 +0000 UTC m=+125.835936924" Sep 10 00:39:51.572193 kubelet[2104]: E0910 00:39:51.572156 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:51.801793 systemd[1]: 
run-containerd-runc-k8s.io-596ca456e6da1667c661efe7eb1a56ede7de9dff41ff2026b92554b6a697de25-runc.stVasl.mount: Deactivated successfully. Sep 10 00:39:51.851111 kubelet[2104]: E0910 00:39:51.850975 2104 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39412->127.0.0.1:34579: write tcp 127.0.0.1:39412->127.0.0.1:34579: write: broken pipe Sep 10 00:39:52.373804 systemd-networkd[1083]: lxc_health: Gained IPv6LL Sep 10 00:39:52.574420 kubelet[2104]: E0910 00:39:52.574383 2104 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:53.926250 systemd[1]: run-containerd-runc-k8s.io-596ca456e6da1667c661efe7eb1a56ede7de9dff41ff2026b92554b6a697de25-runc.3guatk.mount: Deactivated successfully. Sep 10 00:39:56.062732 sshd[3989]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:56.065273 systemd[1]: sshd@30-10.0.0.12:22-10.0.0.1:46994.service: Deactivated successfully. Sep 10 00:39:56.066417 systemd-logind[1293]: Session 31 logged out. Waiting for processes to exit. Sep 10 00:39:56.066449 systemd[1]: session-31.scope: Deactivated successfully. Sep 10 00:39:56.067346 systemd-logind[1293]: Removed session 31.