May 15 10:46:23.828763 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu May 15 09:06:41 -00 2025 May 15 10:46:23.828781 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0 May 15 10:46:23.828790 kernel: BIOS-provided physical RAM map: May 15 10:46:23.828796 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 10:46:23.828801 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 10:46:23.828806 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 10:46:23.828813 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 15 10:46:23.828819 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 10:46:23.828824 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 15 10:46:23.828831 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 15 10:46:23.828836 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 15 10:46:23.828842 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 15 10:46:23.828847 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 15 10:46:23.828853 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 10:46:23.828860 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 15 10:46:23.828867 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 15 10:46:23.828873 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 
10:46:23.828878 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 10:46:23.828884 kernel: NX (Execute Disable) protection: active May 15 10:46:23.828890 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 15 10:46:23.828896 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 15 10:46:23.828902 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 15 10:46:23.828907 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 15 10:46:23.828913 kernel: extended physical RAM map: May 15 10:46:23.828919 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 10:46:23.828925 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 10:46:23.828931 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 10:46:23.828937 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 15 10:46:23.828943 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 10:46:23.828949 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 15 10:46:23.828955 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 15 10:46:23.828960 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 15 10:46:23.828966 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 15 10:46:23.828972 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable May 15 10:46:23.828977 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 15 10:46:23.828983 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 15 10:46:23.828990 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 15 10:46:23.828996 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data May 15 10:46:23.829002 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 10:46:23.829008 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 15 10:46:23.829016 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 15 10:46:23.829023 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 10:46:23.829029 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 10:46:23.829036 kernel: efi: EFI v2.70 by EDK II May 15 10:46:23.829043 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 15 10:46:23.829049 kernel: random: crng init done May 15 10:46:23.829055 kernel: SMBIOS 2.8 present. May 15 10:46:23.829061 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 15 10:46:23.829068 kernel: Hypervisor detected: KVM May 15 10:46:23.829074 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 10:46:23.829080 kernel: kvm-clock: cpu 0, msr 2b19a001, primary cpu clock May 15 10:46:23.829087 kernel: kvm-clock: using sched offset of 3996351521 cycles May 15 10:46:23.829095 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 10:46:23.829101 kernel: tsc: Detected 2794.748 MHz processor May 15 10:46:23.829108 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 10:46:23.829114 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 10:46:23.829121 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 15 10:46:23.829127 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 10:46:23.829134 kernel: Using GB pages for direct mapping May 15 10:46:23.829140 kernel: Secure boot disabled May 15 10:46:23.829189 kernel: ACPI: Early table checksum verification disabled May 15 10:46:23.829197 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 15 10:46:23.829204 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 15 10:46:23.829210 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:46:23.829217 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:46:23.829223 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 15 10:46:23.829230 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:46:23.829236 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:46:23.829243 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:46:23.829249 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:46:23.829257 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 15 10:46:23.829263 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 15 10:46:23.829270 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 15 10:46:23.829276 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 15 10:46:23.829282 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 15 10:46:23.829289 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 15 10:46:23.829295 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 15 10:46:23.829301 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 15 10:46:23.829308 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 15 10:46:23.829315 kernel: No NUMA configuration found May 15 10:46:23.829322 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 15 10:46:23.829328 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 15 
10:46:23.829335 kernel: Zone ranges: May 15 10:46:23.829341 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 10:46:23.829347 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 15 10:46:23.829354 kernel: Normal empty May 15 10:46:23.829360 kernel: Movable zone start for each node May 15 10:46:23.829366 kernel: Early memory node ranges May 15 10:46:23.829374 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 15 10:46:23.829380 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 15 10:46:23.829386 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 15 10:46:23.829393 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 15 10:46:23.829399 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 15 10:46:23.829413 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 15 10:46:23.829420 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 15 10:46:23.829426 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 10:46:23.829433 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 15 10:46:23.829440 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 15 10:46:23.829448 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 10:46:23.829454 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 15 10:46:23.829461 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 15 10:46:23.829467 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 15 10:46:23.829474 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 10:46:23.829480 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 10:46:23.829486 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 10:46:23.829493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 10:46:23.829499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 10:46:23.829507 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 10:46:23.829513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 10:46:23.829519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 10:46:23.829526 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 10:46:23.829532 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 10:46:23.829539 kernel: TSC deadline timer available May 15 10:46:23.829545 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 15 10:46:23.829551 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 10:46:23.829558 kernel: kvm-guest: setup PV sched yield May 15 10:46:23.829565 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 15 10:46:23.829572 kernel: Booting paravirtualized kernel on KVM May 15 10:46:23.829583 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 10:46:23.829591 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 15 10:46:23.829598 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 15 10:46:23.829605 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 15 10:46:23.829611 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 10:46:23.829618 kernel: kvm-guest: setup async PF for cpu 0 May 15 10:46:23.829624 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 15 10:46:23.829631 kernel: kvm-guest: PV spinlocks enabled May 15 10:46:23.829638 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 10:46:23.829645 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 May 15 10:46:23.829653 kernel: Policy zone: DMA32 May 15 10:46:23.829661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0 May 15 10:46:23.829668 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 10:46:23.829675 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 10:46:23.829683 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 10:46:23.829689 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 10:46:23.829697 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 169308K reserved, 0K cma-reserved) May 15 10:46:23.829704 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 10:46:23.829711 kernel: ftrace: allocating 34585 entries in 136 pages May 15 10:46:23.829717 kernel: ftrace: allocated 136 pages with 2 groups May 15 10:46:23.829724 kernel: rcu: Hierarchical RCU implementation. May 15 10:46:23.829731 kernel: rcu: RCU event tracing is enabled. May 15 10:46:23.829738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 10:46:23.829747 kernel: Rude variant of Tasks RCU enabled. May 15 10:46:23.829754 kernel: Tracing variant of Tasks RCU enabled. May 15 10:46:23.829761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 10:46:23.829767 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 10:46:23.829774 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 10:46:23.829781 kernel: Console: colour dummy device 80x25 May 15 10:46:23.829788 kernel: printk: console [ttyS0] enabled May 15 10:46:23.829794 kernel: ACPI: Core revision 20210730 May 15 10:46:23.829801 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 10:46:23.829810 kernel: APIC: Switch to symmetric I/O mode setup May 15 10:46:23.829817 kernel: x2apic enabled May 15 10:46:23.829823 kernel: Switched APIC routing to physical x2apic. May 15 10:46:23.829830 kernel: kvm-guest: setup PV IPIs May 15 10:46:23.829837 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 10:46:23.829844 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 10:46:23.829850 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) May 15 10:46:23.829857 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 10:46:23.829864 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 10:46:23.829872 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 10:46:23.829879 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 10:46:23.829885 kernel: Spectre V2 : Mitigation: Retpolines May 15 10:46:23.829892 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 10:46:23.829899 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 10:46:23.829906 kernel: RETBleed: Mitigation: untrained return thunk May 15 10:46:23.829913 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 10:46:23.829920 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 15 10:46:23.829927 kernel: 
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 10:46:23.829935 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 10:46:23.829941 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 10:46:23.829948 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 10:46:23.829955 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 15 10:46:23.829962 kernel: Freeing SMP alternatives memory: 32K May 15 10:46:23.829968 kernel: pid_max: default: 32768 minimum: 301 May 15 10:46:23.829975 kernel: LSM: Security Framework initializing May 15 10:46:23.829981 kernel: SELinux: Initializing. May 15 10:46:23.829988 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 10:46:23.829996 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 10:46:23.830003 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 10:46:23.830010 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 10:46:23.830016 kernel: ... version: 0 May 15 10:46:23.830023 kernel: ... bit width: 48 May 15 10:46:23.830030 kernel: ... generic registers: 6 May 15 10:46:23.830037 kernel: ... value mask: 0000ffffffffffff May 15 10:46:23.830043 kernel: ... max period: 00007fffffffffff May 15 10:46:23.830050 kernel: ... fixed-purpose events: 0 May 15 10:46:23.830058 kernel: ... event mask: 000000000000003f May 15 10:46:23.830065 kernel: signal: max sigframe size: 1776 May 15 10:46:23.830071 kernel: rcu: Hierarchical SRCU implementation. May 15 10:46:23.830078 kernel: smp: Bringing up secondary CPUs ... May 15 10:46:23.830085 kernel: x86: Booting SMP configuration: May 15 10:46:23.830091 kernel: .... 
node #0, CPUs: #1 May 15 10:46:23.830098 kernel: kvm-clock: cpu 1, msr 2b19a041, secondary cpu clock May 15 10:46:23.830105 kernel: kvm-guest: setup async PF for cpu 1 May 15 10:46:23.830111 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 15 10:46:23.830119 kernel: #2 May 15 10:46:23.830126 kernel: kvm-clock: cpu 2, msr 2b19a081, secondary cpu clock May 15 10:46:23.830133 kernel: kvm-guest: setup async PF for cpu 2 May 15 10:46:23.830140 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 15 10:46:23.830164 kernel: #3 May 15 10:46:23.830171 kernel: kvm-clock: cpu 3, msr 2b19a0c1, secondary cpu clock May 15 10:46:23.830178 kernel: kvm-guest: setup async PF for cpu 3 May 15 10:46:23.830184 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 15 10:46:23.830191 kernel: smp: Brought up 1 node, 4 CPUs May 15 10:46:23.830198 kernel: smpboot: Max logical packages: 1 May 15 10:46:23.830206 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 15 10:46:23.830213 kernel: devtmpfs: initialized May 15 10:46:23.830220 kernel: x86/mm: Memory block size: 128MB May 15 10:46:23.830227 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 15 10:46:23.830234 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 15 10:46:23.830240 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 15 10:46:23.830247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 15 10:46:23.830254 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 15 10:46:23.830261 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 10:46:23.830269 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 10:46:23.830276 kernel: pinctrl core: initialized pinctrl subsystem May 15 10:46:23.830283 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family May 15 10:46:23.830290 kernel: audit: initializing netlink subsys (disabled) May 15 10:46:23.830296 kernel: audit: type=2000 audit(1747305983.711:1): state=initialized audit_enabled=0 res=1 May 15 10:46:23.830303 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 10:46:23.830310 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 10:46:23.830316 kernel: cpuidle: using governor menu May 15 10:46:23.830323 kernel: ACPI: bus type PCI registered May 15 10:46:23.830331 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 10:46:23.830338 kernel: dca service started, version 1.12.1 May 15 10:46:23.830345 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 15 10:46:23.830351 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 15 10:46:23.830358 kernel: PCI: Using configuration type 1 for base access May 15 10:46:23.830365 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 10:46:23.830372 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 10:46:23.830379 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 10:46:23.830386 kernel: ACPI: Added _OSI(Module Device) May 15 10:46:23.830393 kernel: ACPI: Added _OSI(Processor Device) May 15 10:46:23.830400 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 10:46:23.830414 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 10:46:23.830421 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 10:46:23.830428 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 10:46:23.830435 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 10:46:23.830442 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 10:46:23.830448 kernel: ACPI: Interpreter enabled May 15 10:46:23.830455 kernel: ACPI: PM: (supports S0 S3 S5) May 15 10:46:23.830463 kernel: ACPI: Using IOAPIC for interrupt routing May 15 10:46:23.830470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 10:46:23.830477 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 10:46:23.830484 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 10:46:23.830598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 10:46:23.830671 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 10:46:23.830737 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 10:46:23.830748 kernel: PCI host bridge to bus 0000:00 May 15 10:46:23.830822 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 10:46:23.830887 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 10:46:23.830948 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 10:46:23.831009 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 15 
10:46:23.831070 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 10:46:23.831132 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 15 10:46:23.831210 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 10:46:23.831295 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 15 10:46:23.831372 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 15 10:46:23.831452 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 15 10:46:23.831522 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 15 10:46:23.831591 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 15 10:46:23.831660 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 15 10:46:23.831732 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 10:46:23.831813 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 15 10:46:23.831886 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 15 10:46:23.831959 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 15 10:46:23.832028 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 15 10:46:23.832105 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 15 10:46:23.832192 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 15 10:46:23.832260 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 15 10:46:23.832327 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 15 10:46:23.832402 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 10:46:23.832483 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 15 10:46:23.832605 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 15 10:46:23.832675 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 15 10:46:23.832748 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] May 15 10:46:23.832823 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 15 10:46:23.832891 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 10:46:23.832968 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 15 10:46:23.833036 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 15 10:46:23.833103 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 15 10:46:23.833212 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 15 10:46:23.833285 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 15 10:46:23.833294 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 10:46:23.833302 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 10:46:23.833309 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 10:46:23.833315 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 10:46:23.833322 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 10:46:23.833329 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 10:46:23.833336 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 10:46:23.833344 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 10:46:23.833351 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 10:46:23.833358 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 10:46:23.833365 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 15 10:46:23.833371 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 10:46:23.833378 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 10:46:23.833385 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 10:46:23.833391 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 10:46:23.833398 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 
10:46:23.833414 kernel: iommu: Default domain type: Translated May 15 10:46:23.833421 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 10:46:23.833491 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 10:46:23.833558 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 10:46:23.833624 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 10:46:23.833633 kernel: vgaarb: loaded May 15 10:46:23.833640 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 10:46:23.833647 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 10:46:23.833653 kernel: PTP clock support registered May 15 10:46:23.833662 kernel: Registered efivars operations May 15 10:46:23.833669 kernel: PCI: Using ACPI for IRQ routing May 15 10:46:23.833676 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 10:46:23.833682 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 15 10:46:23.833689 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 15 10:46:23.833695 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 15 10:46:23.833702 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 15 10:46:23.833708 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 15 10:46:23.833715 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 15 10:46:23.833724 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 10:46:23.833731 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 10:46:23.833737 kernel: clocksource: Switched to clocksource kvm-clock May 15 10:46:23.833744 kernel: VFS: Disk quotas dquot_6.6.0 May 15 10:46:23.833751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 10:46:23.833758 kernel: pnp: PnP ACPI init May 15 10:46:23.833832 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 15 10:46:23.833842 kernel: pnp: PnP ACPI: found 6 devices 
May 15 10:46:23.833850 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 10:46:23.833857 kernel: NET: Registered PF_INET protocol family May 15 10:46:23.833864 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 10:46:23.833871 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 10:46:23.833878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 10:46:23.833885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 10:46:23.833892 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 15 10:46:23.833898 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 10:46:23.833906 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 10:46:23.833913 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 10:46:23.833920 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 10:46:23.833927 kernel: NET: Registered PF_XDP protocol family May 15 10:46:23.833999 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 15 10:46:23.834069 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 15 10:46:23.834131 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 10:46:23.834204 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 10:46:23.834269 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 10:46:23.834333 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 15 10:46:23.834394 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 15 10:46:23.834464 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 15 10:46:23.834473 kernel: PCI: CLS 0 bytes, default 64 May 15 10:46:23.834480 
kernel: Initialise system trusted keyrings May 15 10:46:23.834487 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 10:46:23.834494 kernel: Key type asymmetric registered May 15 10:46:23.834500 kernel: Asymmetric key parser 'x509' registered May 15 10:46:23.834509 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 10:46:23.834516 kernel: io scheduler mq-deadline registered May 15 10:46:23.834530 kernel: io scheduler kyber registered May 15 10:46:23.834539 kernel: io scheduler bfq registered May 15 10:46:23.834546 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 10:46:23.834554 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 15 10:46:23.834561 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 15 10:46:23.834568 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 15 10:46:23.834575 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 10:46:23.834584 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 10:46:23.834591 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 10:46:23.834598 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 10:46:23.834605 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 10:46:23.834681 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 10:46:23.834692 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 10:46:23.834752 kernel: rtc_cmos 00:04: registered as rtc0 May 15 10:46:23.834816 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T10:46:23 UTC (1747305983) May 15 10:46:23.834882 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 15 10:46:23.834891 kernel: efifb: probing for efifb May 15 10:46:23.834898 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 15 10:46:23.834906 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 15 10:46:23.834913 kernel: efifb: 
scrolling: redraw May 15 10:46:23.834921 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 10:46:23.834930 kernel: Console: switching to colour frame buffer device 160x50 May 15 10:46:23.834937 kernel: fb0: EFI VGA frame buffer device May 15 10:46:23.834947 kernel: pstore: Registered efi as persistent store backend May 15 10:46:23.834965 kernel: NET: Registered PF_INET6 protocol family May 15 10:46:23.834974 kernel: Segment Routing with IPv6 May 15 10:46:23.834982 kernel: In-situ OAM (IOAM) with IPv6 May 15 10:46:23.834991 kernel: NET: Registered PF_PACKET protocol family May 15 10:46:23.834998 kernel: Key type dns_resolver registered May 15 10:46:23.835005 kernel: IPI shorthand broadcast: enabled May 15 10:46:23.835013 kernel: sched_clock: Marking stable (412206669, 126946999)->(586517379, -47363711) May 15 10:46:23.835020 kernel: registered taskstats version 1 May 15 10:46:23.835027 kernel: Loading compiled-in X.509 certificates May 15 10:46:23.835035 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 04007c306af6b7696d09b3c2eafc1297036fd28e' May 15 10:46:23.835042 kernel: Key type .fscrypt registered May 15 10:46:23.835049 kernel: Key type fscrypt-provisioning registered May 15 10:46:23.835056 kernel: pstore: Using crash dump compression: deflate May 15 10:46:23.835063 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 15 10:46:23.835072 kernel: ima: Allocated hash algorithm: sha1 May 15 10:46:23.835079 kernel: ima: No architecture policies found May 15 10:46:23.835086 kernel: clk: Disabling unused clocks May 15 10:46:23.835094 kernel: Freeing unused kernel image (initmem) memory: 47472K May 15 10:46:23.835101 kernel: Write protecting the kernel read-only data: 28672k May 15 10:46:23.835108 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 15 10:46:23.835115 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 15 10:46:23.835123 kernel: Run /init as init process May 15 10:46:23.835130 kernel: with arguments: May 15 10:46:23.835138 kernel: /init May 15 10:46:23.835156 kernel: with environment: May 15 10:46:23.835164 kernel: HOME=/ May 15 10:46:23.835170 kernel: TERM=linux May 15 10:46:23.835178 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 10:46:23.835187 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:46:23.835196 systemd[1]: Detected virtualization kvm. May 15 10:46:23.835204 systemd[1]: Detected architecture x86-64. May 15 10:46:23.835213 systemd[1]: Running in initrd. May 15 10:46:23.835220 systemd[1]: No hostname configured, using default hostname. May 15 10:46:23.835227 systemd[1]: Hostname set to <localhost>. May 15 10:46:23.835235 systemd[1]: Initializing machine ID from VM UUID. May 15 10:46:23.835243 systemd[1]: Queued start job for default target initrd.target. May 15 10:46:23.835250 systemd[1]: Started systemd-ask-password-console.path. May 15 10:46:23.835257 systemd[1]: Reached target cryptsetup.target. May 15 10:46:23.835265 systemd[1]: Reached target paths.target. May 15 10:46:23.835272 systemd[1]: Reached target slices.target. 
May 15 10:46:23.835281 systemd[1]: Reached target swap.target. May 15 10:46:23.835288 systemd[1]: Reached target timers.target. May 15 10:46:23.835296 systemd[1]: Listening on iscsid.socket. May 15 10:46:23.835303 systemd[1]: Listening on iscsiuio.socket. May 15 10:46:23.835311 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:46:23.835319 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:46:23.835326 systemd[1]: Listening on systemd-journald.socket. May 15 10:46:23.835335 systemd[1]: Listening on systemd-networkd.socket. May 15 10:46:23.835342 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:46:23.835350 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:46:23.835357 systemd[1]: Reached target sockets.target. May 15 10:46:23.835365 systemd[1]: Starting kmod-static-nodes.service... May 15 10:46:23.835372 systemd[1]: Finished network-cleanup.service. May 15 10:46:23.835380 systemd[1]: Starting systemd-fsck-usr.service... May 15 10:46:23.835387 systemd[1]: Starting systemd-journald.service... May 15 10:46:23.835395 systemd[1]: Starting systemd-modules-load.service... May 15 10:46:23.835403 systemd[1]: Starting systemd-resolved.service... May 15 10:46:23.835418 systemd[1]: Starting systemd-vconsole-setup.service... May 15 10:46:23.835426 systemd[1]: Finished kmod-static-nodes.service. May 15 10:46:23.835433 systemd[1]: Finished systemd-fsck-usr.service. May 15 10:46:23.835441 kernel: audit: type=1130 audit(1747305983.829:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.835449 systemd[1]: Finished systemd-vconsole-setup.service. May 15 10:46:23.835457 kernel: audit: type=1130 audit(1747305983.834:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:23.835468 systemd-journald[198]: Journal started May 15 10:46:23.835506 systemd-journald[198]: Runtime Journal (/run/log/journal/91c9d866bbcc4a0db8f43cc3a4d8a3d8) is 6.0M, max 48.4M, 42.4M free. May 15 10:46:23.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.838180 systemd[1]: Started systemd-journald.service. May 15 10:46:23.838159 systemd-modules-load[199]: Inserted module 'overlay' May 15 10:46:23.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.843162 kernel: audit: type=1130 audit(1747305983.839:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.843445 systemd[1]: Starting dracut-cmdline-ask.service... May 15 10:46:23.845700 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:46:23.851977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:46:23.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:23.854429 systemd-resolved[200]: Positive Trust Anchors: May 15 10:46:23.858690 kernel: audit: type=1130 audit(1747305983.853:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.854438 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:46:23.854466 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:46:23.856577 systemd-resolved[200]: Defaulting to hostname 'linux'. May 15 10:46:23.873774 kernel: audit: type=1130 audit(1747305983.858:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.873812 kernel: audit: type=1130 audit(1747305983.860:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:23.857231 systemd[1]: Finished dracut-cmdline-ask.service. May 15 10:46:23.858739 systemd[1]: Started systemd-resolved.service. May 15 10:46:23.861029 systemd[1]: Reached target nss-lookup.target. May 15 10:46:23.878698 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 10:46:23.873845 systemd[1]: Starting dracut-cmdline.service... May 15 10:46:23.882952 systemd-modules-load[199]: Inserted module 'br_netfilter' May 15 10:46:23.883881 kernel: Bridge firewalling registered May 15 10:46:23.888000 dracut-cmdline[216]: dracut-dracut-053 May 15 10:46:23.890280 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f8c1bc5ff10765e781843bfc97fc5357002a3f8a120201a0e954fce1d2ba48f0 May 15 10:46:23.900171 kernel: SCSI subsystem initialized May 15 10:46:23.910661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 10:46:23.910683 kernel: device-mapper: uevent: version 1.0.3 May 15 10:46:23.911926 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 10:46:23.914517 systemd-modules-load[199]: Inserted module 'dm_multipath' May 15 10:46:23.915211 systemd[1]: Finished systemd-modules-load.service. May 15 10:46:23.920811 kernel: audit: type=1130 audit(1747305983.916:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:23.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.916963 systemd[1]: Starting systemd-sysctl.service... May 15 10:46:23.925217 systemd[1]: Finished systemd-sysctl.service. May 15 10:46:23.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.930170 kernel: audit: type=1130 audit(1747305983.926:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:23.953173 kernel: Loading iSCSI transport class v2.0-870. May 15 10:46:23.968177 kernel: iscsi: registered transport (tcp) May 15 10:46:23.989170 kernel: iscsi: registered transport (qla4xxx) May 15 10:46:23.989190 kernel: QLogic iSCSI HBA Driver May 15 10:46:24.019244 systemd[1]: Finished dracut-cmdline.service. May 15 10:46:24.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:24.021537 systemd[1]: Starting dracut-pre-udev.service... May 15 10:46:24.025085 kernel: audit: type=1130 audit(1747305984.020:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:24.068173 kernel: raid6: avx2x4 gen() 24495 MB/s May 15 10:46:24.085170 kernel: raid6: avx2x4 xor() 8082 MB/s May 15 10:46:24.102175 kernel: raid6: avx2x2 gen() 32468 MB/s May 15 10:46:24.119171 kernel: raid6: avx2x2 xor() 19235 MB/s May 15 10:46:24.136175 kernel: raid6: avx2x1 gen() 26572 MB/s May 15 10:46:24.153172 kernel: raid6: avx2x1 xor() 15364 MB/s May 15 10:46:24.170173 kernel: raid6: sse2x4 gen() 14794 MB/s May 15 10:46:24.187171 kernel: raid6: sse2x4 xor() 7579 MB/s May 15 10:46:24.204170 kernel: raid6: sse2x2 gen() 16401 MB/s May 15 10:46:24.221171 kernel: raid6: sse2x2 xor() 9825 MB/s May 15 10:46:24.238171 kernel: raid6: sse2x1 gen() 12445 MB/s May 15 10:46:24.255557 kernel: raid6: sse2x1 xor() 7801 MB/s May 15 10:46:24.255576 kernel: raid6: using algorithm avx2x2 gen() 32468 MB/s May 15 10:46:24.255585 kernel: raid6: .... xor() 19235 MB/s, rmw enabled May 15 10:46:24.256268 kernel: raid6: using avx2x2 recovery algorithm May 15 10:46:24.268168 kernel: xor: automatically using best checksumming function avx May 15 10:46:24.356177 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 15 10:46:24.363634 systemd[1]: Finished dracut-pre-udev.service. May 15 10:46:24.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:24.365000 audit: BPF prog-id=7 op=LOAD May 15 10:46:24.365000 audit: BPF prog-id=8 op=LOAD May 15 10:46:24.366448 systemd[1]: Starting systemd-udevd.service... May 15 10:46:24.377659 systemd-udevd[400]: Using default interface naming scheme 'v252'. May 15 10:46:24.381325 systemd[1]: Started systemd-udevd.service. May 15 10:46:24.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:24.383901 systemd[1]: Starting dracut-pre-trigger.service... May 15 10:46:24.395935 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation May 15 10:46:24.422292 systemd[1]: Finished dracut-pre-trigger.service. May 15 10:46:24.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:24.424697 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:46:24.455942 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:46:24.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:24.481170 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 10:46:24.496966 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 10:46:24.496979 kernel: GPT:9289727 != 19775487 May 15 10:46:24.496991 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 10:46:24.497000 kernel: GPT:9289727 != 19775487 May 15 10:46:24.497008 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 10:46:24.497016 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:46:24.497025 kernel: cryptd: max_cpu_qlen set to 1000 May 15 10:46:24.497033 kernel: libata version 3.00 loaded. May 15 10:46:24.499633 kernel: AVX2 version of gcm_enc/dec engaged. 
May 15 10:46:24.499653 kernel: AES CTR mode by8 optimization enabled May 15 10:46:24.502605 kernel: ahci 0000:00:1f.2: version 3.0 May 15 10:46:24.518004 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 10:46:24.518017 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 10:46:24.518097 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 10:46:24.518209 kernel: scsi host0: ahci May 15 10:46:24.518297 kernel: scsi host1: ahci May 15 10:46:24.518378 kernel: scsi host2: ahci May 15 10:46:24.518472 kernel: scsi host3: ahci May 15 10:46:24.518551 kernel: scsi host4: ahci May 15 10:46:24.518629 kernel: scsi host5: ahci May 15 10:46:24.518706 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 May 15 10:46:24.518715 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 May 15 10:46:24.518724 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 May 15 10:46:24.518732 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 May 15 10:46:24.518740 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 May 15 10:46:24.518751 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 May 15 10:46:24.527471 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 10:46:24.529453 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445) May 15 10:46:24.530586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 10:46:24.541496 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 10:46:24.544363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:46:24.547223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 10:46:24.548700 systemd[1]: Starting disk-uuid.service... May 15 10:46:24.555137 disk-uuid[527]: Primary Header is updated. 
May 15 10:46:24.555137 disk-uuid[527]: Secondary Entries is updated. May 15 10:46:24.555137 disk-uuid[527]: Secondary Header is updated. May 15 10:46:24.558334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:46:24.829775 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 10:46:24.829828 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 10:46:24.829838 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 10:46:24.829853 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 10:46:24.829862 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 10:46:24.831177 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 10:46:24.832184 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 10:46:24.833805 kernel: ata3.00: applying bridge limits May 15 10:46:24.833818 kernel: ata3.00: configured for UDMA/100 May 15 10:46:24.834178 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 10:46:24.868212 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 10:46:24.885813 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 10:46:24.885833 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 10:46:25.565174 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:46:25.565432 disk-uuid[528]: The operation has completed successfully. May 15 10:46:25.584638 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 10:46:25.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.584721 systemd[1]: Finished disk-uuid.service. May 15 10:46:25.593482 systemd[1]: Starting verity-setup.service... 
May 15 10:46:25.605170 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 10:46:25.623290 systemd[1]: Found device dev-mapper-usr.device. May 15 10:46:25.624481 systemd[1]: Mounting sysusr-usr.mount... May 15 10:46:25.627394 systemd[1]: Finished verity-setup.service. May 15 10:46:25.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.683171 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 10:46:25.683249 systemd[1]: Mounted sysusr-usr.mount. May 15 10:46:25.683439 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 10:46:25.684069 systemd[1]: Starting ignition-setup.service... May 15 10:46:25.685791 systemd[1]: Starting parse-ip-for-networkd.service... May 15 10:46:25.693488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 10:46:25.693513 kernel: BTRFS info (device vda6): using free space tree May 15 10:46:25.693523 kernel: BTRFS info (device vda6): has skinny extents May 15 10:46:25.700850 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 10:46:25.709094 systemd[1]: Finished ignition-setup.service. May 15 10:46:25.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.711304 systemd[1]: Starting ignition-fetch-offline.service... 
May 15 10:46:25.744277 ignition[650]: Ignition 2.14.0 May 15 10:46:25.744287 ignition[650]: Stage: fetch-offline May 15 10:46:25.744326 ignition[650]: no configs at "/usr/lib/ignition/base.d" May 15 10:46:25.744333 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:46:25.744426 ignition[650]: parsed url from cmdline: "" May 15 10:46:25.744429 ignition[650]: no config URL provided May 15 10:46:25.744433 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" May 15 10:46:25.744438 ignition[650]: no config at "/usr/lib/ignition/user.ign" May 15 10:46:25.744452 ignition[650]: op(1): [started] loading QEMU firmware config module May 15 10:46:25.744456 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 10:46:25.749650 ignition[650]: op(1): [finished] loading QEMU firmware config module May 15 10:46:25.749665 ignition[650]: QEMU firmware config was not found. Ignoring... May 15 10:46:25.754170 systemd[1]: Finished parse-ip-for-networkd.service. May 15 10:46:25.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.756000 audit: BPF prog-id=9 op=LOAD May 15 10:46:25.757115 systemd[1]: Starting systemd-networkd.service... May 15 10:46:25.791971 ignition[650]: parsing config with SHA512: 893f7e255d87f649b806f37954a92be8bfc6e8134c3018de6e72cc449c6a5ec8598665568c4184ba23fba0c63088d246082a3604b2d3ad978fe4d1d783c4f332 May 15 10:46:25.798759 unknown[650]: fetched base config from "system" May 15 10:46:25.798879 unknown[650]: fetched user config from "qemu" May 15 10:46:25.799454 ignition[650]: fetch-offline: fetch-offline passed May 15 10:46:25.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:25.800356 systemd[1]: Finished ignition-fetch-offline.service. May 15 10:46:25.799507 ignition[650]: Ignition finished successfully May 15 10:46:25.813632 systemd-networkd[722]: lo: Link UP May 15 10:46:25.813640 systemd-networkd[722]: lo: Gained carrier May 15 10:46:25.815268 systemd-networkd[722]: Enumeration completed May 15 10:46:25.815333 systemd[1]: Started systemd-networkd.service. May 15 10:46:25.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.817542 systemd[1]: Reached target network.target. May 15 10:46:25.817597 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 10:46:25.818295 systemd[1]: Starting ignition-kargs.service... May 15 10:46:25.818932 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:46:25.819512 systemd[1]: Starting iscsiuio.service... May 15 10:46:25.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.823252 systemd[1]: Started iscsiuio.service. May 15 10:46:25.825257 systemd[1]: Starting iscsid.service... May 15 10:46:25.827473 systemd-networkd[722]: eth0: Link UP May 15 10:46:25.827478 systemd-networkd[722]: eth0: Gained carrier May 15 10:46:25.830485 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 10:46:25.830485 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 15 10:46:25.830485 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 10:46:25.830485 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 10:46:25.830485 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 10:46:25.830485 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 10:46:25.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.829115 systemd[1]: Started iscsid.service. May 15 10:46:25.831401 ignition[724]: Ignition 2.14.0 May 15 10:46:25.831081 systemd[1]: Starting dracut-initqueue.service... May 15 10:46:25.831407 ignition[724]: Stage: kargs May 15 10:46:25.833493 systemd[1]: Finished ignition-kargs.service. May 15 10:46:25.831483 ignition[724]: no configs at "/usr/lib/ignition/base.d" May 15 10:46:25.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.839401 systemd[1]: Finished dracut-initqueue.service. 
May 15 10:46:25.831490 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:46:25.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.841090 systemd[1]: Reached target remote-fs-pre.target. May 15 10:46:25.832285 ignition[724]: kargs: kargs passed May 15 10:46:25.843070 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:46:25.832317 ignition[724]: Ignition finished successfully May 15 10:46:25.845230 systemd[1]: Reached target remote-fs.target. May 15 10:46:25.853460 ignition[744]: Ignition 2.14.0 May 15 10:46:25.845959 systemd[1]: Starting dracut-pre-mount.service... May 15 10:46:25.853465 ignition[744]: Stage: disks May 15 10:46:25.846628 systemd[1]: Starting ignition-disks.service... May 15 10:46:25.853542 ignition[744]: no configs at "/usr/lib/ignition/base.d" May 15 10:46:25.852228 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:46:25.853550 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:46:25.852738 systemd[1]: Finished dracut-pre-mount.service. May 15 10:46:25.874529 systemd-fsck[756]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 15 10:46:25.854451 ignition[744]: disks: disks passed May 15 10:46:25.854994 systemd[1]: Finished ignition-disks.service. May 15 10:46:25.854479 ignition[744]: Ignition finished successfully May 15 10:46:25.855942 systemd[1]: Reached target initrd-root-device.target. May 15 10:46:25.857970 systemd[1]: Reached target local-fs-pre.target. May 15 10:46:25.858810 systemd[1]: Reached target local-fs.target. May 15 10:46:25.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:25.859593 systemd[1]: Reached target sysinit.target. May 15 10:46:25.860354 systemd[1]: Reached target basic.target. May 15 10:46:25.860987 systemd[1]: Starting systemd-fsck-root.service... May 15 10:46:25.879768 systemd[1]: Finished systemd-fsck-root.service. May 15 10:46:25.888783 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 10:46:25.881884 systemd[1]: Mounting sysroot.mount... May 15 10:46:25.887835 systemd[1]: Mounted sysroot.mount. May 15 10:46:25.888809 systemd[1]: Reached target initrd-root-fs.target. May 15 10:46:25.889657 systemd[1]: Mounting sysroot-usr.mount... May 15 10:46:25.889978 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 10:46:25.890010 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 10:46:25.890027 systemd[1]: Reached target ignition-diskful.target. May 15 10:46:25.899960 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory May 15 10:46:25.891837 systemd[1]: Mounted sysroot-usr.mount. May 15 10:46:25.902052 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory May 15 10:46:25.893823 systemd[1]: Starting initrd-setup-root.service... May 15 10:46:25.904249 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory May 15 10:46:25.906636 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory May 15 10:46:25.929874 systemd[1]: Finished initrd-setup-root.service. May 15 10:46:25.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.931457 systemd[1]: Starting ignition-mount.service... May 15 10:46:25.932634 systemd[1]: Starting sysroot-boot.service... 
May 15 10:46:25.936479 bash[807]: umount: /sysroot/usr/share/oem: not mounted. May 15 10:46:25.944930 ignition[809]: INFO : Ignition 2.14.0 May 15 10:46:25.944930 ignition[809]: INFO : Stage: mount May 15 10:46:25.946496 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:46:25.946496 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:46:25.946496 ignition[809]: INFO : mount: mount passed May 15 10:46:25.946496 ignition[809]: INFO : Ignition finished successfully May 15 10:46:25.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:25.946636 systemd[1]: Finished ignition-mount.service. May 15 10:46:25.952950 systemd[1]: Finished sysroot-boot.service. May 15 10:46:25.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:26.633124 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 10:46:26.640176 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) May 15 10:46:26.640212 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 10:46:26.642894 kernel: BTRFS info (device vda6): using free space tree May 15 10:46:26.642918 kernel: BTRFS info (device vda6): has skinny extents May 15 10:46:26.646279 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 10:46:26.647829 systemd[1]: Starting ignition-files.service... 
May 15 10:46:26.661552 ignition[837]: INFO : Ignition 2.14.0
May 15 10:46:26.661552 ignition[837]: INFO : Stage: files
May 15 10:46:26.663132 ignition[837]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:46:26.663132 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:46:26.666491 ignition[837]: DEBUG : files: compiled without relabeling support, skipping
May 15 10:46:26.668281 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 10:46:26.668281 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 10:46:26.671576 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 10:46:26.673146 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 10:46:26.673146 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 10:46:26.673146 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 10:46:26.673146 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 10:46:26.672357 unknown[837]: wrote ssh authorized keys file for user: core
May 15 10:46:26.768457 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 10:46:26.891644 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 10:46:26.893801 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:46:26.893801 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 10:46:27.378447 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 10:46:27.450198 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 10:46:27.450198 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 10:46:27.453755 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 10:46:27.780362 systemd-networkd[722]: eth0: Gained IPv6LL
May 15 10:46:27.855412 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 10:46:28.308009 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 10:46:28.308009 ignition[837]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
May 15 10:46:28.312471 ignition[837]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:46:28.345041 ignition[837]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 10:46:28.346646 ignition[837]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 10:46:28.346646 ignition[837]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:46:28.346646 ignition[837]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 10:46:28.346646 ignition[837]: INFO : files: files passed
May 15 10:46:28.346646 ignition[837]: INFO : Ignition finished successfully
May 15 10:46:28.353737 systemd[1]: Finished ignition-files.service.
May 15 10:46:28.359030 kernel: kauditd_printk_skb: 24 callbacks suppressed
May 15 10:46:28.359060 kernel: audit: type=1130 audit(1747305988.354:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.359285 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 10:46:28.360216 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 10:46:28.360768 systemd[1]: Starting ignition-quench.service...
May 15 10:46:28.370873 kernel: audit: type=1130 audit(1747305988.364:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.370898 kernel: audit: type=1131 audit(1747305988.364:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.363464 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 10:46:28.363533 systemd[1]: Finished ignition-quench.service.
May 15 10:46:28.375886 initrd-setup-root-after-ignition[864]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 15 10:46:28.378630 initrd-setup-root-after-ignition[866]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 10:46:28.380501 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 10:46:28.385356 kernel: audit: type=1130 audit(1747305988.380:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.380628 systemd[1]: Reached target ignition-complete.target.
May 15 10:46:28.387038 systemd[1]: Starting initrd-parse-etc.service...
May 15 10:46:28.398869 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 10:46:28.398941 systemd[1]: Finished initrd-parse-etc.service.
May 15 10:46:28.406997 kernel: audit: type=1130 audit(1747305988.400:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.407015 kernel: audit: type=1131 audit(1747305988.400:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.400751 systemd[1]: Reached target initrd-fs.target.
May 15 10:46:28.407780 systemd[1]: Reached target initrd.target.
May 15 10:46:28.409241 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 10:46:28.409930 systemd[1]: Starting dracut-pre-pivot.service...
May 15 10:46:28.421626 systemd[1]: Finished dracut-pre-pivot.service.
May 15 10:46:28.425962 kernel: audit: type=1130 audit(1747305988.421:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.426001 systemd[1]: Starting initrd-cleanup.service...
May 15 10:46:28.434557 systemd[1]: Stopped target nss-lookup.target.
May 15 10:46:28.434669 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 10:46:28.436983 systemd[1]: Stopped target timers.target.
May 15 10:46:28.438493 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 10:46:28.443589 kernel: audit: type=1131 audit(1747305988.439:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.438571 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 10:46:28.440013 systemd[1]: Stopped target initrd.target.
May 15 10:46:28.444391 systemd[1]: Stopped target basic.target.
May 15 10:46:28.445827 systemd[1]: Stopped target ignition-complete.target.
May 15 10:46:28.447356 systemd[1]: Stopped target ignition-diskful.target.
May 15 10:46:28.448846 systemd[1]: Stopped target initrd-root-device.target.
May 15 10:46:28.450497 systemd[1]: Stopped target remote-fs.target.
May 15 10:46:28.452041 systemd[1]: Stopped target remote-fs-pre.target.
May 15 10:46:28.453666 systemd[1]: Stopped target sysinit.target.
May 15 10:46:28.455106 systemd[1]: Stopped target local-fs.target.
May 15 10:46:28.456615 systemd[1]: Stopped target local-fs-pre.target.
May 15 10:46:28.458092 systemd[1]: Stopped target swap.target.
May 15 10:46:28.464408 kernel: audit: type=1131 audit(1747305988.460:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.459468 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 10:46:28.459545 systemd[1]: Stopped dracut-pre-mount.service.
May 15 10:46:28.470420 kernel: audit: type=1131 audit(1747305988.466:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.461050 systemd[1]: Stopped target cryptsetup.target.
May 15 10:46:28.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.465225 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 10:46:28.465310 systemd[1]: Stopped dracut-initqueue.service.
May 15 10:46:28.466977 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 10:46:28.467054 systemd[1]: Stopped ignition-fetch-offline.service.
May 15 10:46:28.471356 systemd[1]: Stopped target paths.target.
May 15 10:46:28.472718 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 10:46:28.476187 systemd[1]: Stopped systemd-ask-password-console.path.
May 15 10:46:28.477718 systemd[1]: Stopped target slices.target.
May 15 10:46:28.479454 systemd[1]: Stopped target sockets.target.
May 15 10:46:28.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.481009 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 10:46:28.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.481089 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 15 10:46:28.486720 iscsid[728]: iscsid shutting down.
May 15 10:46:28.482661 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 10:46:28.482735 systemd[1]: Stopped ignition-files.service.
May 15 10:46:28.484671 systemd[1]: Stopping ignition-mount.service...
May 15 10:46:28.491033 ignition[879]: INFO : Ignition 2.14.0
May 15 10:46:28.491033 ignition[879]: INFO : Stage: umount
May 15 10:46:28.491033 ignition[879]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 10:46:28.491033 ignition[879]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 10:46:28.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.485900 systemd[1]: Stopping iscsid.service...
May 15 10:46:28.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.497913 ignition[879]: INFO : umount: umount passed
May 15 10:46:28.497913 ignition[879]: INFO : Ignition finished successfully
May 15 10:46:28.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.491336 systemd[1]: Stopping sysroot-boot.service...
May 15 10:46:28.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.493230 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 10:46:28.493386 systemd[1]: Stopped systemd-udev-trigger.service.
May 15 10:46:28.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.495389 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 10:46:28.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.495474 systemd[1]: Stopped dracut-pre-trigger.service.
May 15 10:46:28.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.498597 systemd[1]: iscsid.service: Deactivated successfully.
May 15 10:46:28.498672 systemd[1]: Stopped iscsid.service.
May 15 10:46:28.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.499017 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 10:46:28.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.499076 systemd[1]: Stopped ignition-mount.service.
May 15 10:46:28.501228 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 10:46:28.501290 systemd[1]: Closed iscsid.socket.
May 15 10:46:28.502362 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 10:46:28.502393 systemd[1]: Stopped ignition-disks.service.
May 15 10:46:28.504099 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 10:46:28.504130 systemd[1]: Stopped ignition-kargs.service.
May 15 10:46:28.505657 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 10:46:28.505685 systemd[1]: Stopped ignition-setup.service.
May 15 10:46:28.507429 systemd[1]: Stopping iscsiuio.service...
May 15 10:46:28.508919 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 10:46:28.508977 systemd[1]: Finished initrd-cleanup.service.
May 15 10:46:28.510296 systemd[1]: iscsiuio.service: Deactivated successfully.
May 15 10:46:28.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.510366 systemd[1]: Stopped iscsiuio.service.
May 15 10:46:28.511654 systemd[1]: Stopped target network.target.
May 15 10:46:28.513278 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 10:46:28.513312 systemd[1]: Closed iscsiuio.socket.
May 15 10:46:28.514096 systemd[1]: Stopping systemd-networkd.service...
May 15 10:46:28.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.516280 systemd[1]: Stopping systemd-resolved.service...
May 15 10:46:28.519208 systemd-networkd[722]: eth0: DHCPv6 lease lost
May 15 10:46:28.532000 audit: BPF prog-id=9 op=UNLOAD
May 15 10:46:28.522484 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 10:46:28.524244 systemd[1]: Stopped systemd-networkd.service.
May 15 10:46:28.528009 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 10:46:28.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.528079 systemd[1]: Stopped systemd-resolved.service.
May 15 10:46:28.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.529659 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 10:46:28.529683 systemd[1]: Closed systemd-networkd.socket.
May 15 10:46:28.534411 systemd[1]: Stopping network-cleanup.service...
May 15 10:46:28.535388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 10:46:28.535428 systemd[1]: Stopped parse-ip-for-networkd.service.
May 15 10:46:28.537601 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 10:46:28.537632 systemd[1]: Stopped systemd-sysctl.service.
May 15 10:46:28.545935 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 10:46:28.545970 systemd[1]: Stopped systemd-modules-load.service.
May 15 10:46:28.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.547897 systemd[1]: Stopping systemd-udevd.service...
May 15 10:46:28.549000 audit: BPF prog-id=6 op=UNLOAD
May 15 10:46:28.551651 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 10:46:28.551722 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 10:46:28.554077 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 10:46:28.554244 systemd[1]: Stopped systemd-udevd.service.
May 15 10:46:28.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.556840 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 10:46:28.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.556965 systemd[1]: Stopped network-cleanup.service.
May 15 10:46:28.558842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 10:46:28.558874 systemd[1]: Closed systemd-udevd-control.socket.
May 15 10:46:28.560388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 10:46:28.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.560412 systemd[1]: Closed systemd-udevd-kernel.socket.
May 15 10:46:28.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.562412 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 10:46:28.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.562442 systemd[1]: Stopped dracut-pre-udev.service.
May 15 10:46:28.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.564058 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 10:46:28.564086 systemd[1]: Stopped dracut-cmdline.service.
May 15 10:46:28.565833 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 10:46:28.565862 systemd[1]: Stopped dracut-cmdline-ask.service.
May 15 10:46:28.566505 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 15 10:46:28.566773 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 10:46:28.566811 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 15 10:46:28.568272 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 10:46:28.568315 systemd[1]: Stopped kmod-static-nodes.service.
May 15 10:46:28.569818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 10:46:28.569849 systemd[1]: Stopped systemd-vconsole-setup.service.
May 15 10:46:28.571441 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 10:46:28.571753 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 10:46:28.571815 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 15 10:46:28.670997 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 10:46:28.671075 systemd[1]: Stopped sysroot-boot.service.
May 15 10:46:28.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.672851 systemd[1]: Reached target initrd-switch-root.target.
May 15 10:46:28.674320 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 10:46:28.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:28.674352 systemd[1]: Stopped initrd-setup-root.service.
May 15 10:46:28.676940 systemd[1]: Starting initrd-switch-root.service...
May 15 10:46:28.695894 systemd[1]: Switching root.
May 15 10:46:28.711698 systemd-journald[198]: Journal stopped
May 15 10:46:31.287361 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
May 15 10:46:31.287415 kernel: SELinux: Class mctp_socket not defined in policy.
May 15 10:46:31.287427 kernel: SELinux: Class anon_inode not defined in policy.
May 15 10:46:31.287439 kernel: SELinux: the above unknown classes and permissions will be allowed
May 15 10:46:31.287448 kernel: SELinux: policy capability network_peer_controls=1
May 15 10:46:31.287459 kernel: SELinux: policy capability open_perms=1
May 15 10:46:31.287472 kernel: SELinux: policy capability extended_socket_class=1
May 15 10:46:31.287482 kernel: SELinux: policy capability always_check_network=0
May 15 10:46:31.287492 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 10:46:31.287501 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 10:46:31.287510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 10:46:31.287519 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 10:46:31.287531 systemd[1]: Successfully loaded SELinux policy in 37.713ms.
May 15 10:46:31.287544 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.438ms.
May 15 10:46:31.287558 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 10:46:31.287569 systemd[1]: Detected virtualization kvm.
May 15 10:46:31.287580 systemd[1]: Detected architecture x86-64.
May 15 10:46:31.287590 systemd[1]: Detected first boot.
May 15 10:46:31.287601 systemd[1]: Initializing machine ID from VM UUID.
May 15 10:46:31.287612 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 15 10:46:31.287624 systemd[1]: Populated /etc with preset unit settings.
May 15 10:46:31.287635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:46:31.287646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:46:31.287659 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:46:31.287670 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 10:46:31.287680 systemd[1]: Stopped initrd-switch-root.service.
May 15 10:46:31.287691 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 10:46:31.287701 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 15 10:46:31.287712 systemd[1]: Created slice system-addon\x2drun.slice.
May 15 10:46:31.287722 systemd[1]: Created slice system-getty.slice.
May 15 10:46:31.287733 systemd[1]: Created slice system-modprobe.slice.
May 15 10:46:31.287744 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 15 10:46:31.287755 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 15 10:46:31.287765 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 15 10:46:31.287775 systemd[1]: Created slice user.slice.
May 15 10:46:31.287788 systemd[1]: Started systemd-ask-password-console.path.
May 15 10:46:31.287798 systemd[1]: Started systemd-ask-password-wall.path.
May 15 10:46:31.287809 systemd[1]: Set up automount boot.automount.
May 15 10:46:31.287819 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 15 10:46:31.287830 systemd[1]: Stopped target initrd-switch-root.target.
May 15 10:46:31.287840 systemd[1]: Stopped target initrd-fs.target.
May 15 10:46:31.287850 systemd[1]: Stopped target initrd-root-fs.target. May 15 10:46:31.287860 systemd[1]: Reached target integritysetup.target. May 15 10:46:31.287872 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:46:31.287883 systemd[1]: Reached target remote-fs.target. May 15 10:46:31.287893 systemd[1]: Reached target slices.target. May 15 10:46:31.287903 systemd[1]: Reached target swap.target. May 15 10:46:31.287913 systemd[1]: Reached target torcx.target. May 15 10:46:31.287923 systemd[1]: Reached target veritysetup.target. May 15 10:46:31.287933 systemd[1]: Listening on systemd-coredump.socket. May 15 10:46:31.287943 systemd[1]: Listening on systemd-initctl.socket. May 15 10:46:31.287953 systemd[1]: Listening on systemd-networkd.socket. May 15 10:46:31.287965 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:46:31.287975 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:46:31.287985 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:46:31.287995 systemd[1]: Mounting dev-hugepages.mount... May 15 10:46:31.288007 systemd[1]: Mounting dev-mqueue.mount... May 15 10:46:31.288017 systemd[1]: Mounting media.mount... May 15 10:46:31.288028 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:46:31.288038 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:46:31.288048 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:46:31.288059 systemd[1]: Mounting tmp.mount... May 15 10:46:31.288069 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:46:31.288081 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:46:31.288091 systemd[1]: Starting kmod-static-nodes.service... May 15 10:46:31.288101 systemd[1]: Starting modprobe@configfs.service... May 15 10:46:31.288111 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:46:31.288121 systemd[1]: Starting modprobe@drm.service... 
May 15 10:46:31.288131 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:46:31.288141 systemd[1]: Starting modprobe@fuse.service... May 15 10:46:31.288180 systemd[1]: Starting modprobe@loop.service... May 15 10:46:31.288191 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:46:31.288201 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 10:46:31.288211 systemd[1]: Stopped systemd-fsck-root.service. May 15 10:46:31.288221 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 10:46:31.288231 systemd[1]: Stopped systemd-fsck-usr.service. May 15 10:46:31.288247 kernel: fuse: init (API version 7.34) May 15 10:46:31.288256 kernel: loop: module loaded May 15 10:46:31.288266 systemd[1]: Stopped systemd-journald.service. May 15 10:46:31.288277 systemd[1]: Starting systemd-journald.service... May 15 10:46:31.288287 systemd[1]: Starting systemd-modules-load.service... May 15 10:46:31.288298 systemd[1]: Starting systemd-network-generator.service... May 15 10:46:31.288308 systemd[1]: Starting systemd-remount-fs.service... May 15 10:46:31.288320 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:46:31.288330 systemd[1]: verity-setup.service: Deactivated successfully. May 15 10:46:31.288340 systemd[1]: Stopped verity-setup.service. May 15 10:46:31.288350 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 10:46:31.288361 systemd[1]: Mounted dev-hugepages.mount. May 15 10:46:31.288373 systemd[1]: Mounted dev-mqueue.mount. May 15 10:46:31.288383 systemd[1]: Mounted media.mount. May 15 10:46:31.288393 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:46:31.288405 systemd-journald[993]: Journal started May 15 10:46:31.288440 systemd-journald[993]: Runtime Journal (/run/log/journal/91c9d866bbcc4a0db8f43cc3a4d8a3d8) is 6.0M, max 48.4M, 42.4M free. 
May 15 10:46:28.769000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 10:46:29.015000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:46:29.015000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:46:29.015000 audit: BPF prog-id=10 op=LOAD May 15 10:46:29.015000 audit: BPF prog-id=10 op=UNLOAD May 15 10:46:29.015000 audit: BPF prog-id=11 op=LOAD May 15 10:46:29.015000 audit: BPF prog-id=11 op=UNLOAD May 15 10:46:29.041000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 10:46:29.041000 audit[912]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58a2 a1=c000146de0 a2=c00014f040 a3=32 items=0 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:46:29.041000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:46:29.043000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 10:46:29.043000 audit[912]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5979 a2=1ed a3=0 items=2 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:46:29.043000 audit: CWD cwd="/" May 15 10:46:29.043000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:29.043000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:29.043000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:46:31.150000 audit: BPF prog-id=12 op=LOAD May 15 10:46:31.150000 audit: BPF prog-id=3 op=UNLOAD May 15 10:46:31.151000 audit: BPF prog-id=13 op=LOAD May 15 10:46:31.151000 audit: BPF prog-id=14 op=LOAD May 15 10:46:31.151000 audit: BPF prog-id=4 op=UNLOAD May 15 10:46:31.151000 audit: BPF prog-id=5 op=UNLOAD May 15 10:46:31.151000 audit: BPF prog-id=15 op=LOAD May 15 10:46:31.151000 audit: BPF prog-id=12 op=UNLOAD May 15 10:46:31.151000 audit: BPF prog-id=16 op=LOAD May 15 10:46:31.151000 audit: BPF prog-id=17 op=LOAD May 15 10:46:31.151000 audit: BPF prog-id=13 op=UNLOAD May 15 10:46:31.151000 audit: BPF prog-id=14 op=UNLOAD May 15 10:46:31.152000 audit: BPF prog-id=18 op=LOAD May 15 10:46:31.152000 audit: BPF prog-id=15 op=UNLOAD May 15 10:46:31.152000 audit: BPF prog-id=19 op=LOAD May 15 10:46:31.152000 audit: BPF prog-id=20 op=LOAD May 15 10:46:31.152000 
audit: BPF prog-id=16 op=UNLOAD May 15 10:46:31.152000 audit: BPF prog-id=17 op=UNLOAD May 15 10:46:31.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.166000 audit: BPF prog-id=18 op=UNLOAD May 15 10:46:31.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:31.262000 audit: BPF prog-id=21 op=LOAD May 15 10:46:31.262000 audit: BPF prog-id=22 op=LOAD May 15 10:46:31.262000 audit: BPF prog-id=23 op=LOAD May 15 10:46:31.262000 audit: BPF prog-id=19 op=UNLOAD May 15 10:46:31.262000 audit: BPF prog-id=20 op=UNLOAD May 15 10:46:31.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.285000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:46:31.285000 audit[993]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd62ae3a0 a2=4000 a3=7fffd62ae43c items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:46:31.285000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:46:31.149230 systemd[1]: Queued start job for default target multi-user.target. May 15 10:46:29.040621 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:46:31.149250 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:46:29.040829 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:46:31.152886 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 15 10:46:29.040844 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:46:29.040868 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 10:46:29.040877 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 10:46:29.040903 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 10:46:29.040915 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 10:46:29.041082 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 10:46:29.041114 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:46:29.041125 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:46:29.041437 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 10:46:29.041467 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 10:46:29.041483 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 May 15 10:46:29.041495 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 10:46:29.041509 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 May 15 10:46:29.041522 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 10:46:30.891763 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:30Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:46:30.892066 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:46:30.892212 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl 
May 15 10:46:30.892441 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:46:30.892502 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 10:46:30.892570 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T10:46:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 10:46:31.291168 systemd[1]: Started systemd-journald.service. May 15 10:46:31.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.291617 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:46:31.292512 systemd[1]: Mounted tmp.mount. May 15 10:46:31.293456 systemd[1]: Finished flatcar-tmpfiles.service. May 15 10:46:31.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.294566 systemd[1]: Finished kmod-static-nodes.service. May 15 10:46:31.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 15 10:46:31.295598 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:46:31.295767 systemd[1]: Finished modprobe@configfs.service. May 15 10:46:31.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.296786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:46:31.296941 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:46:31.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.297933 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:46:31.298097 systemd[1]: Finished modprobe@drm.service. May 15 10:46:31.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:46:31.299068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:46:31.299234 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:46:31.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.300304 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 10:46:31.300452 systemd[1]: Finished modprobe@fuse.service. May 15 10:46:31.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.301418 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:46:31.301566 systemd[1]: Finished modprobe@loop.service. May 15 10:46:31.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.302679 systemd[1]: Finished systemd-modules-load.service. 
May 15 10:46:31.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.303811 systemd[1]: Finished systemd-network-generator.service. May 15 10:46:31.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.304956 systemd[1]: Finished systemd-remount-fs.service. May 15 10:46:31.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.306185 systemd[1]: Reached target network-pre.target. May 15 10:46:31.308062 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 10:46:31.309766 systemd[1]: Mounting sys-kernel-config.mount... May 15 10:46:31.310519 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 10:46:31.311687 systemd[1]: Starting systemd-hwdb-update.service... May 15 10:46:31.313600 systemd[1]: Starting systemd-journal-flush.service... May 15 10:46:31.314453 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:46:31.315683 systemd[1]: Starting systemd-random-seed.service... May 15 10:46:31.316530 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:46:31.317513 systemd[1]: Starting systemd-sysctl.service... May 15 10:46:31.321455 systemd-journald[993]: Time spent on flushing to /var/log/journal/91c9d866bbcc4a0db8f43cc3a4d8a3d8 is 16.470ms for 1174 entries. 
May 15 10:46:31.321455 systemd-journald[993]: System Journal (/var/log/journal/91c9d866bbcc4a0db8f43cc3a4d8a3d8) is 8.0M, max 195.6M, 187.6M free. May 15 10:46:31.353418 systemd-journald[993]: Received client request to flush runtime journal. May 15 10:46:31.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.319819 systemd[1]: Starting systemd-sysusers.service... May 15 10:46:31.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.322617 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 10:46:31.324071 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:46:31.325062 systemd[1]: Mounted sys-kernel-config.mount. May 15 10:46:31.355775 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 10:46:31.326044 systemd[1]: Finished systemd-random-seed.service. 
May 15 10:46:31.327357 systemd[1]: Reached target first-boot-complete.target. May 15 10:46:31.329179 systemd[1]: Starting systemd-udev-settle.service... May 15 10:46:31.336084 systemd[1]: Finished systemd-sysusers.service. May 15 10:46:31.338025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:46:31.339215 systemd[1]: Finished systemd-sysctl.service. May 15 10:46:31.354259 systemd[1]: Finished systemd-journal-flush.service. May 15 10:46:31.360478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:46:31.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.748040 systemd[1]: Finished systemd-hwdb-update.service. May 15 10:46:31.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.749000 audit: BPF prog-id=24 op=LOAD May 15 10:46:31.749000 audit: BPF prog-id=25 op=LOAD May 15 10:46:31.749000 audit: BPF prog-id=7 op=UNLOAD May 15 10:46:31.749000 audit: BPF prog-id=8 op=UNLOAD May 15 10:46:31.750103 systemd[1]: Starting systemd-udevd.service... May 15 10:46:31.764928 systemd-udevd[1020]: Using default interface naming scheme 'v252'. May 15 10:46:31.776891 systemd[1]: Started systemd-udevd.service. May 15 10:46:31.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.778000 audit: BPF prog-id=26 op=LOAD May 15 10:46:31.779664 systemd[1]: Starting systemd-networkd.service... 
May 15 10:46:31.786000 audit: BPF prog-id=27 op=LOAD May 15 10:46:31.786000 audit: BPF prog-id=28 op=LOAD May 15 10:46:31.787000 audit: BPF prog-id=29 op=LOAD May 15 10:46:31.787784 systemd[1]: Starting systemd-userdbd.service... May 15 10:46:31.809995 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 15 10:46:31.815660 systemd[1]: Started systemd-userdbd.service. May 15 10:46:31.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.828870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:46:31.837169 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 10:46:31.841172 kernel: ACPI: button: Power Button [PWRF] May 15 10:46:31.855000 audit[1038]: AVC avc: denied { confidentiality } for pid=1038 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 10:46:31.860249 systemd-networkd[1031]: lo: Link UP May 15 10:46:31.860253 systemd-networkd[1031]: lo: Gained carrier May 15 10:46:31.855000 audit[1038]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cda1fb7bd0 a1=338ac a2=7f344e5c3bc5 a3=5 items=110 ppid=1020 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:46:31.860696 systemd-networkd[1031]: Enumeration completed May 15 10:46:31.860773 systemd[1]: Started systemd-networkd.service. 
May 15 10:46:31.855000 audit: CWD cwd="/" May 15 10:46:31.855000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=1 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=2 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=3 name=(null) inode=15059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=4 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=5 name=(null) inode=15060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=6 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=7 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=8 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
10:46:31.855000 audit: PATH item=9 name=(null) inode=15062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=10 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.855000 audit: PATH item=11 name=(null) inode=15063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:46:31.855000 audit: PATH item=12 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:46:31.861948 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 15 10:46:31.855000 audit: PATH item=13 name=(null) inode=15064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=14 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=15 name=(null) inode=15065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=16 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=17 name=(null) inode=15066 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=18 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=19 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=20 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.862885 systemd-networkd[1031]: eth0: Link UP
May 15 10:46:31.862889 systemd-networkd[1031]: eth0: Gained carrier
May 15 10:46:31.855000 audit: PATH item=21 name=(null) inode=15068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=22 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=23 name=(null) inode=15069 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=24 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=25 name=(null) inode=15070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=26 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=27 name=(null) inode=15071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=28 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=29 name=(null) inode=15072 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=30 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=31 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=32 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=33 name=(null) inode=15074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=34 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=35 name=(null) inode=15075 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=36 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=37 name=(null) inode=15076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=38 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=39 name=(null) inode=15077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=40 name=(null) inode=15073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=41 name=(null) inode=15078 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=42 name=(null) inode=15058 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=43 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=44 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=45 name=(null) inode=15080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=46 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=47 name=(null) inode=15081 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=48 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=49 name=(null) inode=15082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=50 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=51 name=(null) inode=15083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=52 name=(null) inode=15079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=53 name=(null) inode=15084 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=55 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=56 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=57 name=(null) inode=15086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=58 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=59 name=(null) inode=15087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=60 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=61 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=62 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=63 name=(null) inode=15089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=64 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=65 name=(null) inode=15090 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=66 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=67 name=(null) inode=15091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=68 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=69 name=(null) inode=15092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=70 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=71 name=(null) inode=15093 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=72 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=73 name=(null) inode=15094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=74 name=(null) inode=15094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=75 name=(null) inode=15095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=76 name=(null) inode=15094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=77 name=(null) inode=15096 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=78 name=(null) inode=15094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=79 name=(null) inode=15097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=80 name=(null) inode=15094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=81 name=(null) inode=15098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=82 name=(null) inode=15094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=83 name=(null) inode=15099 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=84 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=85 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=86 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=87 name=(null) inode=15101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=88 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=89 name=(null) inode=15102 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=90 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=91 name=(null) inode=15103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=92 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=93 name=(null) inode=15104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=94 name=(null) inode=15100 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=95 name=(null) inode=15105 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=96 name=(null) inode=15085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=97 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=98 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=99 name=(null) inode=15107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=100 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=101 name=(null) inode=15108 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=102 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=103 name=(null) inode=15109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=104 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=105 name=(null) inode=15110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=106 name=(null) inode=15106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=107 name=(null) inode=15111 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PATH item=109 name=(null) inode=15112 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 10:46:31.855000 audit: PROCTITLE proctitle="(udev-worker)"
May 15 10:46:31.874261 systemd-networkd[1031]: eth0: DHCPv4 address 10.0.0.121/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 10:46:31.876171 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 15 10:46:31.878593 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 15 10:46:31.878703 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 15 10:46:31.878836 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 15 10:46:31.898194 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 15 10:46:31.903166 kernel: mousedev: PS/2 mouse device common for all mice
May 15 10:46:31.940497 kernel: kvm: Nested Virtualization enabled
May 15 10:46:31.940575 kernel: SVM: kvm: Nested Paging enabled
May 15 10:46:31.940589 kernel: SVM: Virtual VMLOAD VMSAVE supported
May 15 10:46:31.940601 kernel: SVM: Virtual GIF supported
May 15 10:46:31.957176 kernel: EDAC MC: Ver: 3.0.0
May 15 10:46:31.980548 systemd[1]: Finished systemd-udev-settle.service.
May 15 10:46:31.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:31.982554 systemd[1]: Starting lvm2-activation-early.service...
May 15 10:46:31.989630 lvm[1056]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 10:46:32.017080 systemd[1]: Finished lvm2-activation-early.service.
May 15 10:46:32.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.018160 systemd[1]: Reached target cryptsetup.target.
May 15 10:46:32.019974 systemd[1]: Starting lvm2-activation.service...
May 15 10:46:32.023113 lvm[1057]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 10:46:32.052230 systemd[1]: Finished lvm2-activation.service.
May 15 10:46:32.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.053228 systemd[1]: Reached target local-fs-pre.target.
May 15 10:46:32.054083 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 10:46:32.054107 systemd[1]: Reached target local-fs.target.
May 15 10:46:32.054921 systemd[1]: Reached target machines.target.
May 15 10:46:32.056743 systemd[1]: Starting ldconfig.service...
May 15 10:46:32.057708 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:46:32.057785 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:32.058928 systemd[1]: Starting systemd-boot-update.service...
May 15 10:46:32.060741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 15 10:46:32.062648 systemd[1]: Starting systemd-machine-id-commit.service...
May 15 10:46:32.064444 systemd[1]: Starting systemd-sysext.service...
May 15 10:46:32.065596 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1059 (bootctl)
May 15 10:46:32.066647 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 15 10:46:32.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.069774 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 15 10:46:32.074448 systemd[1]: Unmounting usr-share-oem.mount...
May 15 10:46:32.079240 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 15 10:46:32.079432 systemd[1]: Unmounted usr-share-oem.mount.
May 15 10:46:32.089168 kernel: loop0: detected capacity change from 0 to 218376
May 15 10:46:32.102540 systemd-fsck[1067]: fsck.fat 4.2 (2021-01-31)
May 15 10:46:32.102540 systemd-fsck[1067]: /dev/vda1: 791 files, 120752/258078 clusters
May 15 10:46:32.103846 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 15 10:46:32.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.106725 systemd[1]: Mounting boot.mount...
May 15 10:46:32.123324 systemd[1]: Mounted boot.mount.
May 15 10:46:32.343177 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 10:46:32.344797 systemd[1]: Finished systemd-boot-update.service.
May 15 10:46:32.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.348333 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 10:46:32.348979 systemd[1]: Finished systemd-machine-id-commit.service.
May 15 10:46:32.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.360228 kernel: loop1: detected capacity change from 0 to 218376
May 15 10:46:32.364182 (sd-sysext)[1072]: Using extensions 'kubernetes'.
May 15 10:46:32.364913 (sd-sysext)[1072]: Merged extensions into '/usr'.
May 15 10:46:32.379563 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.380825 systemd[1]: Mounting usr-share-oem.mount...
May 15 10:46:32.381834 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:46:32.383420 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:46:32.385198 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:46:32.386881 systemd[1]: Starting modprobe@loop.service...
May 15 10:46:32.387661 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:46:32.387767 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:32.387858 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.390098 systemd[1]: Mounted usr-share-oem.mount.
May 15 10:46:32.391192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:46:32.391316 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:46:32.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.392491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:46:32.392588 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:46:32.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.393740 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:46:32.393834 systemd[1]: Finished modprobe@loop.service.
May 15 10:46:32.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.394961 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:46:32.395048 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:46:32.396060 systemd[1]: Finished systemd-sysext.service.
May 15 10:46:32.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.397703 ldconfig[1058]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 10:46:32.397870 systemd[1]: Starting ensure-sysext.service...
May 15 10:46:32.399461 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 15 10:46:32.405375 systemd[1]: Reloading.
May 15 10:46:32.409592 systemd-tmpfiles[1079]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 15 10:46:32.410208 systemd-tmpfiles[1079]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 10:46:32.411539 systemd-tmpfiles[1079]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 10:46:32.456998 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2025-05-15T10:46:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:46:32.457030 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2025-05-15T10:46:32Z" level=info msg="torcx already run"
May 15 10:46:32.519635 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:46:32.519657 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:46:32.537384 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:46:32.589000 audit: BPF prog-id=30 op=LOAD
May 15 10:46:32.589000 audit: BPF prog-id=26 op=UNLOAD
May 15 10:46:32.590000 audit: BPF prog-id=31 op=LOAD
May 15 10:46:32.590000 audit: BPF prog-id=32 op=LOAD
May 15 10:46:32.590000 audit: BPF prog-id=24 op=UNLOAD
May 15 10:46:32.590000 audit: BPF prog-id=25 op=UNLOAD
May 15 10:46:32.592000 audit: BPF prog-id=33 op=LOAD
May 15 10:46:32.592000 audit: BPF prog-id=21 op=UNLOAD
May 15 10:46:32.592000 audit: BPF prog-id=34 op=LOAD
May 15 10:46:32.592000 audit: BPF prog-id=35 op=LOAD
May 15 10:46:32.592000 audit: BPF prog-id=22 op=UNLOAD
May 15 10:46:32.592000 audit: BPF prog-id=23 op=UNLOAD
May 15 10:46:32.593000 audit: BPF prog-id=36 op=LOAD
May 15 10:46:32.593000 audit: BPF prog-id=27 op=UNLOAD
May 15 10:46:32.593000 audit: BPF prog-id=37 op=LOAD
May 15 10:46:32.593000 audit: BPF prog-id=38 op=LOAD
May 15 10:46:32.593000 audit: BPF prog-id=28 op=UNLOAD
May 15 10:46:32.593000 audit: BPF prog-id=29 op=UNLOAD
May 15 10:46:32.595048 systemd[1]: Finished ldconfig.service.
May 15 10:46:32.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.596841 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 15 10:46:32.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.600523 systemd[1]: Starting audit-rules.service...
May 15 10:46:32.602061 systemd[1]: Starting clean-ca-certificates.service...
May 15 10:46:32.603906 systemd[1]: Starting systemd-journal-catalog-update.service...
May 15 10:46:32.605000 audit: BPF prog-id=39 op=LOAD
May 15 10:46:32.606286 systemd[1]: Starting systemd-resolved.service...
May 15 10:46:32.607000 audit: BPF prog-id=40 op=LOAD
May 15 10:46:32.608626 systemd[1]: Starting systemd-timesyncd.service...
May 15 10:46:32.610659 systemd[1]: Starting systemd-update-utmp.service...
May 15 10:46:32.612130 systemd[1]: Finished clean-ca-certificates.service.
May 15 10:46:32.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.615000 audit[1153]: SYSTEM_BOOT pid=1153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.615054 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:46:32.618823 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.619055 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:46:32.620194 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:46:32.622425 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:46:32.624412 systemd[1]: Starting modprobe@loop.service...
May 15 10:46:32.625613 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:46:32.625817 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:32.625965 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:46:32.626086 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.627564 systemd[1]: Finished systemd-journal-catalog-update.service.
May 15 10:46:32.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.629177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:46:32.629292 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:46:32.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 10:46:32.631000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 15 10:46:32.631000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff7ad3b1d0 a2=420 a3=0 items=0 ppid=1142 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 10:46:32.631000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 15 10:46:32.632016 augenrules[1165]: No rules
May 15 10:46:32.630865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:46:32.630971 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:46:32.632788 systemd[1]: Finished audit-rules.service.
May 15 10:46:32.634054 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:46:32.634162 systemd[1]: Finished modprobe@loop.service.
May 15 10:46:32.635475 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:46:32.635589 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:46:32.636797 systemd[1]: Starting systemd-update-done.service...
May 15 10:46:32.638656 systemd[1]: Finished systemd-update-utmp.service.
May 15 10:46:32.642605 systemd[1]: Finished systemd-update-done.service.
May 15 10:46:32.643895 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.644080 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:46:32.645169 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:46:32.646903 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:46:32.648636 systemd[1]: Starting modprobe@loop.service...
May 15 10:46:32.649560 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:46:32.649863 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:32.649990 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:46:32.650093 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.651034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:46:32.651415 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:46:32.652651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:46:32.652746 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:46:32.653964 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:46:32.654056 systemd[1]: Finished modprobe@loop.service.
May 15 10:46:32.655256 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:46:32.655338 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:46:32.657698 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.657894 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 10:46:32.658946 systemd[1]: Starting modprobe@dm_mod.service...
May 15 10:46:32.660799 systemd[1]: Starting modprobe@drm.service...
May 15 10:46:32.662552 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 10:46:32.664341 systemd[1]: Starting modprobe@loop.service...
May 15 10:46:32.665123 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 10:46:32.665249 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:32.665676 systemd-resolved[1148]: Positive Trust Anchors:
May 15 10:46:32.665684 systemd-resolved[1148]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 10:46:32.665710 systemd-resolved[1148]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 10:46:32.666085 systemd[1]: Starting systemd-networkd-wait-online.service...
May 15 10:46:32.667037 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 10:46:32.667133 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 10:46:32.668079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 10:46:32.668226 systemd[1]: Finished modprobe@dm_mod.service.
May 15 10:46:32.669350 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 10:46:32.669471 systemd[1]: Finished modprobe@drm.service.
May 15 10:46:32.670619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 10:46:32.670714 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 10:46:32.672058 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 10:46:32.672259 systemd[1]: Finished modprobe@loop.service.
May 15 10:46:32.673527 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 10:46:32.673613 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 10:46:32.674526 systemd[1]: Finished ensure-sysext.service.
May 15 10:46:32.676260 systemd[1]: Started systemd-timesyncd.service.
May 15 10:46:32.677324 systemd[1]: Reached target time-set.target.
May 15 10:46:32.677948 systemd-resolved[1148]: Defaulting to hostname 'linux'.
May 15 10:46:33.358816 systemd-timesyncd[1152]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 10:46:33.358867 systemd-timesyncd[1152]: Initial clock synchronization to Thu 2025-05-15 10:46:33.358729 UTC.
May 15 10:46:33.359998 systemd[1]: Started systemd-resolved.service.
May 15 10:46:33.360919 systemd[1]: Reached target network.target.
May 15 10:46:33.361759 systemd[1]: Reached target nss-lookup.target.
May 15 10:46:33.362626 systemd[1]: Reached target sysinit.target.
May 15 10:46:33.363496 systemd[1]: Started motdgen.path.
May 15 10:46:33.364299 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 15 10:46:33.365574 systemd[1]: Started logrotate.timer.
May 15 10:46:33.366407 systemd[1]: Started mdadm.timer.
May 15 10:46:33.367281 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 15 10:46:33.368193 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 10:46:33.368215 systemd[1]: Reached target paths.target.
May 15 10:46:33.369027 systemd[1]: Reached target timers.target.
May 15 10:46:33.370090 systemd[1]: Listening on dbus.socket.
May 15 10:46:33.371747 systemd[1]: Starting docker.socket...
May 15 10:46:33.374679 systemd[1]: Listening on sshd.socket.
May 15 10:46:33.375583 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:33.375927 systemd[1]: Listening on docker.socket.
May 15 10:46:33.376755 systemd[1]: Reached target sockets.target.
May 15 10:46:33.377549 systemd[1]: Reached target basic.target.
May 15 10:46:33.378329 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 10:46:33.378352 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 10:46:33.379168 systemd[1]: Starting containerd.service...
May 15 10:46:33.380766 systemd[1]: Starting dbus.service...
May 15 10:46:33.382285 systemd[1]: Starting enable-oem-cloudinit.service...
May 15 10:46:33.384247 systemd[1]: Starting extend-filesystems.service...
May 15 10:46:33.385268 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 15 10:46:33.386161 systemd[1]: Starting motdgen.service...
May 15 10:46:33.387742 systemd[1]: Starting prepare-helm.service...
May 15 10:46:33.389363 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 15 10:46:33.392608 jq[1185]: false
May 15 10:46:33.391296 systemd[1]: Starting sshd-keygen.service...
May 15 10:46:33.394140 systemd[1]: Starting systemd-logind.service...
May 15 10:46:33.395072 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 10:46:33.395135 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 10:46:33.395453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 10:46:33.396009 systemd[1]: Starting update-engine.service...
May 15 10:46:33.397911 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 15 10:46:33.401415 jq[1199]: true
May 15 10:46:33.402890 dbus-daemon[1184]: [system] SELinux support is enabled
May 15 10:46:33.402934 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 10:46:33.403099 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 15 10:46:33.403234 systemd[1]: Started dbus.service.
May 15 10:46:33.405645 extend-filesystems[1186]: Found loop1
May 15 10:46:33.406507 extend-filesystems[1186]: Found sr0
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda1
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda2
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda3
May 15 10:46:33.406507 extend-filesystems[1186]: Found usr
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda4
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda6
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda7
May 15 10:46:33.406507 extend-filesystems[1186]: Found vda9
May 15 10:46:33.406507 extend-filesystems[1186]: Checking size of /dev/vda9
May 15 10:46:33.410879 systemd[1]: motdgen.service: Deactivated successfully.
May 15 10:46:33.413482 systemd[1]: Finished motdgen.service.
May 15 10:46:33.416551 extend-filesystems[1186]: Resized partition /dev/vda9
May 15 10:46:33.418885 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 10:46:33.419054 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 15 10:46:33.419998 extend-filesystems[1211]: resize2fs 1.46.5 (30-Dec-2021)
May 15 10:46:33.429862 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 10:46:33.429886 systemd[1]: Reached target system-config.target.
May 15 10:46:33.431248 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 10:46:33.431266 systemd[1]: Reached target user-config.target.
May 15 10:46:33.432862 jq[1212]: true
May 15 10:46:33.434545 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 10:46:33.434606 tar[1207]: linux-amd64/LICENSE
May 15 10:46:33.434746 tar[1207]: linux-amd64/helm
May 15 10:46:33.437207 update_engine[1198]: I0515 10:46:33.436836 1198 main.cc:92] Flatcar Update Engine starting
May 15 10:46:33.446835 systemd[1]: Started update-engine.service.
May 15 10:46:33.447160 update_engine[1198]: I0515 10:46:33.446885 1198 update_check_scheduler.cc:74] Next update check in 11m56s
May 15 10:46:33.450397 systemd[1]: Started locksmithd.service.
May 15 10:46:33.456538 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 10:46:33.479899 systemd-logind[1196]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 10:46:33.479925 systemd-logind[1196]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 10:46:33.481083 systemd-logind[1196]: New seat seat0.
May 15 10:46:33.483969 extend-filesystems[1211]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 10:46:33.483969 extend-filesystems[1211]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 10:46:33.483969 extend-filesystems[1211]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 10:46:33.490956 extend-filesystems[1186]: Resized filesystem in /dev/vda9
May 15 10:46:33.484241 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 10:46:33.484387 systemd[1]: Finished extend-filesystems.service.
May 15 10:46:33.487242 systemd[1]: Started systemd-logind.service.
May 15 10:46:33.493917 bash[1235]: Updated "/home/core/.ssh/authorized_keys"
May 15 10:46:33.495248 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 15 10:46:33.500395 env[1214]: time="2025-05-15T10:46:33.495098128Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 15 10:46:33.515701 locksmithd[1236]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 10:46:33.518579 env[1214]: time="2025-05-15T10:46:33.518515713Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 10:46:33.518699 env[1214]: time="2025-05-15T10:46:33.518675693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 10:46:33.520045 env[1214]: time="2025-05-15T10:46:33.519990760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 10:46:33.520097 env[1214]: time="2025-05-15T10:46:33.520048127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 10:46:33.520365 env[1214]: time="2025-05-15T10:46:33.520334615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 10:46:33.520426 env[1214]: time="2025-05-15T10:46:33.520361064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 10:46:33.520426 env[1214]: time="2025-05-15T10:46:33.520392263Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 15 10:46:33.520426 env[1214]: time="2025-05-15T10:46:33.520407441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 10:46:33.520549 env[1214]: time="2025-05-15T10:46:33.520503031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 10:46:33.520826 env[1214]: time="2025-05-15T10:46:33.520801030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 10:46:33.520990 env[1214]: time="2025-05-15T10:46:33.520959317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 10:46:33.521048 env[1214]: time="2025-05-15T10:46:33.520988501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 10:46:33.521081 env[1214]: time="2025-05-15T10:46:33.521051620Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 15 10:46:33.521081 env[1214]: time="2025-05-15T10:46:33.521067560Z" level=info msg="metadata content store policy set" policy=shared
May 15 10:46:33.526148 env[1214]: time="2025-05-15T10:46:33.526090000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 10:46:33.526148 env[1214]: time="2025-05-15T10:46:33.526121139Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 10:46:33.526148 env[1214]: time="2025-05-15T10:46:33.526135726Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 10:46:33.526259 env[1214]: time="2025-05-15T10:46:33.526172235Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526259 env[1214]: time="2025-05-15T10:46:33.526185930Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526259 env[1214]: time="2025-05-15T10:46:33.526198654Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526259 env[1214]: time="2025-05-15T10:46:33.526210136Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526259 env[1214]: time="2025-05-15T10:46:33.526258296Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526357 env[1214]: time="2025-05-15T10:46:33.526271330Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526357 env[1214]: time="2025-05-15T10:46:33.526286389Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526357 env[1214]: time="2025-05-15T10:46:33.526299714Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526357 env[1214]: time="2025-05-15T10:46:33.526312658Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 10:46:33.526429 env[1214]: time="2025-05-15T10:46:33.526385735Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 10:46:33.526477 env[1214]: time="2025-05-15T10:46:33.526444425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 10:46:33.526737 env[1214]: time="2025-05-15T10:46:33.526713109Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 10:46:33.526779 env[1214]: time="2025-05-15T10:46:33.526754627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526779 env[1214]: time="2025-05-15T10:46:33.526767531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 10:46:33.526836 env[1214]: time="2025-05-15T10:46:33.526816753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526836 env[1214]: time="2025-05-15T10:46:33.526834036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526902 env[1214]: time="2025-05-15T10:46:33.526845808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526925 env[1214]: time="2025-05-15T10:46:33.526900460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526925 env[1214]: time="2025-05-15T10:46:33.526914427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526962 env[1214]: time="2025-05-15T10:46:33.526925708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526962 env[1214]: time="2025-05-15T10:46:33.526936208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526962 env[1214]: time="2025-05-15T10:46:33.526946296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 10:46:33.526962 env[1214]: time="2025-05-15T10:46:33.526958610Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 10:46:33.527080 env[1214]: time="2025-05-15T10:46:33.527057425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 10:46:33.527080 env[1214]: time="2025-05-15T10:46:33.527077072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 10:46:33.527140 env[1214]: time="2025-05-15T10:46:33.527088483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 10:46:33.527140 env[1214]: time="2025-05-15T10:46:33.527098652Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 10:46:33.527140 env[1214]: time="2025-05-15T10:46:33.527110965Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 15 10:46:33.527140 env[1214]: time="2025-05-15T10:46:33.527120333Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 10:46:33.527140 env[1214]: time="2025-05-15T10:46:33.527137044Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 15 10:46:33.527242 env[1214]: time="2025-05-15T10:46:33.527169685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 10:46:33.527390 env[1214]: time="2025-05-15T10:46:33.527336749Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 10:46:33.527390 env[1214]: time="2025-05-15T10:46:33.527390389Z" level=info msg="Connect containerd service"
May 15 10:46:33.527961 env[1214]: time="2025-05-15T10:46:33.527416078Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 10:46:33.527995 env[1214]: time="2025-05-15T10:46:33.527959427Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 10:46:33.528088 env[1214]: time="2025-05-15T10:46:33.528055377Z" level=info msg="Start subscribing containerd event"
May 15 10:46:33.528137 env[1214]: time="2025-05-15T10:46:33.528095692Z" level=info msg="Start recovering state"
May 15 10:46:33.528166 env[1214]: time="2025-05-15T10:46:33.528137300Z" level=info msg="Start event monitor"
May 15 10:46:33.528166 env[1214]: time="2025-05-15T10:46:33.528149123Z" level=info msg="Start snapshots syncer"
May 15 10:46:33.528166 env[1214]: time="2025-05-15T10:46:33.528162187Z" level=info msg="Start cni network conf syncer for default"
May 15 10:46:33.528252 env[1214]: time="2025-05-15T10:46:33.528170312Z" level=info msg="Start streaming server"
May 15 10:46:33.528408 env[1214]: time="2025-05-15T10:46:33.528380757Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 10:46:33.528458 env[1214]: time="2025-05-15T10:46:33.528438866Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 10:46:33.528549 systemd[1]: Started containerd.service.
May 15 10:46:33.528653 env[1214]: time="2025-05-15T10:46:33.528642929Z" level=info msg="containerd successfully booted in 0.067153s"
May 15 10:46:33.859617 tar[1207]: linux-amd64/README.md
May 15 10:46:33.863838 systemd[1]: Finished prepare-helm.service.
May 15 10:46:34.028764 systemd-networkd[1031]: eth0: Gained IPv6LL
May 15 10:46:34.030808 systemd[1]: Finished systemd-networkd-wait-online.service.
May 15 10:46:34.032188 systemd[1]: Reached target network-online.target.
May 15 10:46:34.034490 systemd[1]: Starting kubelet.service...
May 15 10:46:34.042657 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 10:46:34.063865 systemd[1]: Finished sshd-keygen.service.
May 15 10:46:34.066301 systemd[1]: Starting issuegen.service...
May 15 10:46:34.070995 systemd[1]: issuegen.service: Deactivated successfully.
May 15 10:46:34.071158 systemd[1]: Finished issuegen.service.
May 15 10:46:34.073450 systemd[1]: Starting systemd-user-sessions.service...
May 15 10:46:34.078286 systemd[1]: Finished systemd-user-sessions.service.
May 15 10:46:34.080468 systemd[1]: Started getty@tty1.service.
May 15 10:46:34.082400 systemd[1]: Started serial-getty@ttyS0.service.
May 15 10:46:34.083521 systemd[1]: Reached target getty.target.
May 15 10:46:34.641779 systemd[1]: Started kubelet.service.
May 15 10:46:34.642962 systemd[1]: Reached target multi-user.target.
May 15 10:46:34.644899 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 15 10:46:34.651874 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 15 10:46:34.651986 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 15 10:46:34.655358 systemd[1]: Startup finished in 584ms (kernel) + 5.038s (initrd) + 5.245s (userspace) = 10.868s.
May 15 10:46:35.029951 kubelet[1264]: E0515 10:46:35.029840 1264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:46:35.031275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:46:35.031397 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:46:36.745773 systemd[1]: Created slice system-sshd.slice. May 15 10:46:36.746708 systemd[1]: Started sshd@0-10.0.0.121:22-10.0.0.1:44610.service. May 15 10:46:36.782993 sshd[1273]: Accepted publickey for core from 10.0.0.1 port 44610 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:36.784331 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:36.792114 systemd-logind[1196]: New session 1 of user core. May 15 10:46:36.792943 systemd[1]: Created slice user-500.slice. May 15 10:46:36.793916 systemd[1]: Starting user-runtime-dir@500.service... May 15 10:46:36.801555 systemd[1]: Finished user-runtime-dir@500.service. May 15 10:46:36.802801 systemd[1]: Starting user@500.service... May 15 10:46:36.805162 (systemd)[1276]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:36.888480 systemd[1276]: Queued start job for default target default.target. May 15 10:46:36.889148 systemd[1276]: Reached target paths.target. May 15 10:46:36.889175 systemd[1276]: Reached target sockets.target. May 15 10:46:36.889192 systemd[1276]: Reached target timers.target. May 15 10:46:36.889206 systemd[1276]: Reached target basic.target. May 15 10:46:36.889263 systemd[1276]: Reached target default.target. May 15 10:46:36.889297 systemd[1276]: Startup finished in 79ms. 
May 15 10:46:36.889367 systemd[1]: Started user@500.service. May 15 10:46:36.890277 systemd[1]: Started session-1.scope. May 15 10:46:36.939642 systemd[1]: Started sshd@1-10.0.0.121:22-10.0.0.1:44620.service. May 15 10:46:36.974962 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 44620 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:36.976402 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:36.980011 systemd-logind[1196]: New session 2 of user core. May 15 10:46:36.980910 systemd[1]: Started session-2.scope. May 15 10:46:37.034486 sshd[1285]: pam_unix(sshd:session): session closed for user core May 15 10:46:37.037922 systemd[1]: sshd@1-10.0.0.121:22-10.0.0.1:44620.service: Deactivated successfully. May 15 10:46:37.038583 systemd[1]: session-2.scope: Deactivated successfully. May 15 10:46:37.039186 systemd-logind[1196]: Session 2 logged out. Waiting for processes to exit. May 15 10:46:37.040613 systemd[1]: Started sshd@2-10.0.0.121:22-10.0.0.1:44624.service. May 15 10:46:37.041337 systemd-logind[1196]: Removed session 2. May 15 10:46:37.078140 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 44624 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:37.079604 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:37.083638 systemd-logind[1196]: New session 3 of user core. May 15 10:46:37.084741 systemd[1]: Started session-3.scope. May 15 10:46:37.135380 sshd[1291]: pam_unix(sshd:session): session closed for user core May 15 10:46:37.138102 systemd[1]: sshd@2-10.0.0.121:22-10.0.0.1:44624.service: Deactivated successfully. May 15 10:46:37.138612 systemd[1]: session-3.scope: Deactivated successfully. May 15 10:46:37.139116 systemd-logind[1196]: Session 3 logged out. Waiting for processes to exit. May 15 10:46:37.140042 systemd[1]: Started sshd@3-10.0.0.121:22-10.0.0.1:44628.service. 
May 15 10:46:37.140864 systemd-logind[1196]: Removed session 3. May 15 10:46:37.176197 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 44628 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:37.177719 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:37.181389 systemd-logind[1196]: New session 4 of user core. May 15 10:46:37.182019 systemd[1]: Started session-4.scope. May 15 10:46:37.236395 sshd[1297]: pam_unix(sshd:session): session closed for user core May 15 10:46:37.238651 systemd[1]: sshd@3-10.0.0.121:22-10.0.0.1:44628.service: Deactivated successfully. May 15 10:46:37.239105 systemd[1]: session-4.scope: Deactivated successfully. May 15 10:46:37.239597 systemd-logind[1196]: Session 4 logged out. Waiting for processes to exit. May 15 10:46:37.240393 systemd[1]: Started sshd@4-10.0.0.121:22-10.0.0.1:44630.service. May 15 10:46:37.240992 systemd-logind[1196]: Removed session 4. May 15 10:46:37.272957 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 44630 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:46:37.273960 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:46:37.276988 systemd-logind[1196]: New session 5 of user core. May 15 10:46:37.277676 systemd[1]: Started session-5.scope. May 15 10:46:37.331140 sudo[1306]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 10:46:37.331314 sudo[1306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:46:37.350296 systemd[1]: Starting docker.service... 
May 15 10:46:37.385716 env[1318]: time="2025-05-15T10:46:37.385650733Z" level=info msg="Starting up" May 15 10:46:37.386898 env[1318]: time="2025-05-15T10:46:37.386851135Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:46:37.386898 env[1318]: time="2025-05-15T10:46:37.386881291Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:46:37.386979 env[1318]: time="2025-05-15T10:46:37.386908302Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:46:37.386979 env[1318]: time="2025-05-15T10:46:37.386927758Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:46:37.388520 env[1318]: time="2025-05-15T10:46:37.388482084Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 10:46:37.388520 env[1318]: time="2025-05-15T10:46:37.388507882Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 10:46:37.388628 env[1318]: time="2025-05-15T10:46:37.388539542Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 10:46:37.388628 env[1318]: time="2025-05-15T10:46:37.388557986Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 10:46:37.393963 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport730562828-merged.mount: Deactivated successfully. May 15 10:46:38.125300 env[1318]: time="2025-05-15T10:46:38.125246593Z" level=info msg="Loading containers: start." May 15 10:46:38.235561 kernel: Initializing XFRM netlink socket May 15 10:46:38.263194 env[1318]: time="2025-05-15T10:46:38.263152035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 15 10:46:38.310630 systemd-networkd[1031]: docker0: Link UP May 15 10:46:38.325730 env[1318]: time="2025-05-15T10:46:38.325687684Z" level=info msg="Loading containers: done." May 15 10:46:38.334423 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1308631998-merged.mount: Deactivated successfully. May 15 10:46:38.336850 env[1318]: time="2025-05-15T10:46:38.336809996Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 10:46:38.337005 env[1318]: time="2025-05-15T10:46:38.336979824Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 10:46:38.337095 env[1318]: time="2025-05-15T10:46:38.337072969Z" level=info msg="Daemon has completed initialization" May 15 10:46:38.353092 systemd[1]: Started docker.service. May 15 10:46:38.360062 env[1318]: time="2025-05-15T10:46:38.360015052Z" level=info msg="API listen on /run/docker.sock" May 15 10:46:39.161232 env[1214]: time="2025-05-15T10:46:39.161177484Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 10:46:39.677411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675553207.mount: Deactivated successfully. 
May 15 10:46:41.046493 env[1214]: time="2025-05-15T10:46:41.046426702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:41.048243 env[1214]: time="2025-05-15T10:46:41.048186092Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:41.049907 env[1214]: time="2025-05-15T10:46:41.049877465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:41.051433 env[1214]: time="2025-05-15T10:46:41.051381606Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:41.052039 env[1214]: time="2025-05-15T10:46:41.052006018Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 10:46:41.052644 env[1214]: time="2025-05-15T10:46:41.052614649Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 10:46:43.035286 env[1214]: time="2025-05-15T10:46:43.035227675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:43.149249 env[1214]: time="2025-05-15T10:46:43.149217727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 15 10:46:43.163089 env[1214]: time="2025-05-15T10:46:43.163048750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:43.166693 env[1214]: time="2025-05-15T10:46:43.166666136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:43.167262 env[1214]: time="2025-05-15T10:46:43.167233269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 10:46:43.167906 env[1214]: time="2025-05-15T10:46:43.167864934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 10:46:45.172038 env[1214]: time="2025-05-15T10:46:45.171985682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:45.173807 env[1214]: time="2025-05-15T10:46:45.173782692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:45.175409 env[1214]: time="2025-05-15T10:46:45.175382233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:45.177106 env[1214]: time="2025-05-15T10:46:45.177076010Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:45.177745 env[1214]: time="2025-05-15T10:46:45.177713085Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 10:46:45.178277 env[1214]: time="2025-05-15T10:46:45.178248689Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 10:46:45.282124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 10:46:45.282297 systemd[1]: Stopped kubelet.service. May 15 10:46:45.283484 systemd[1]: Starting kubelet.service... May 15 10:46:45.371864 systemd[1]: Started kubelet.service. May 15 10:46:45.728873 kubelet[1453]: E0515 10:46:45.728802 1453 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:46:45.732250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:46:45.732407 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:46:46.578160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2636604979.mount: Deactivated successfully. 
May 15 10:46:48.092786 env[1214]: time="2025-05-15T10:46:48.092729683Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:48.231940 env[1214]: time="2025-05-15T10:46:48.231885259Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:48.307730 env[1214]: time="2025-05-15T10:46:48.307692421Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:48.342979 env[1214]: time="2025-05-15T10:46:48.342862280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:48.343413 env[1214]: time="2025-05-15T10:46:48.343348873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 10:46:48.344035 env[1214]: time="2025-05-15T10:46:48.343995385Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 10:46:49.456367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592493457.mount: Deactivated successfully. 
May 15 10:46:50.569944 env[1214]: time="2025-05-15T10:46:50.569881782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:50.571780 env[1214]: time="2025-05-15T10:46:50.571746620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:50.573452 env[1214]: time="2025-05-15T10:46:50.573411072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:50.575359 env[1214]: time="2025-05-15T10:46:50.575327376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:50.576091 env[1214]: time="2025-05-15T10:46:50.576059480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 10:46:50.576628 env[1214]: time="2025-05-15T10:46:50.576602729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 10:46:51.014316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500840897.mount: Deactivated successfully. 
May 15 10:46:51.019229 env[1214]: time="2025-05-15T10:46:51.019168936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:51.021322 env[1214]: time="2025-05-15T10:46:51.021282310Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:51.024156 env[1214]: time="2025-05-15T10:46:51.024107369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:51.027017 env[1214]: time="2025-05-15T10:46:51.026984716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:51.027553 env[1214]: time="2025-05-15T10:46:51.027504091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 10:46:51.027999 env[1214]: time="2025-05-15T10:46:51.027969934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 10:46:51.474739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102849206.mount: Deactivated successfully. 
May 15 10:46:54.331837 env[1214]: time="2025-05-15T10:46:54.331775429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:54.333735 env[1214]: time="2025-05-15T10:46:54.333687506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:54.335409 env[1214]: time="2025-05-15T10:46:54.335383717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:54.336947 env[1214]: time="2025-05-15T10:46:54.336924507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:46:54.337693 env[1214]: time="2025-05-15T10:46:54.337662422Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 10:46:55.983223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 10:46:55.983408 systemd[1]: Stopped kubelet.service. May 15 10:46:55.984667 systemd[1]: Starting kubelet.service... May 15 10:46:56.061973 systemd[1]: Started kubelet.service. 
May 15 10:46:56.097289 kubelet[1485]: E0515 10:46:56.097242 1485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:46:56.098990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:46:56.099110 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:46:56.408312 systemd[1]: Stopped kubelet.service. May 15 10:46:56.410178 systemd[1]: Starting kubelet.service... May 15 10:46:56.428855 systemd[1]: Reloading. May 15 10:46:56.492702 /usr/lib/systemd/system-generators/torcx-generator[1518]: time="2025-05-15T10:46:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:46:56.492732 /usr/lib/systemd/system-generators/torcx-generator[1518]: time="2025-05-15T10:46:56Z" level=info msg="torcx already run" May 15 10:46:57.099947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:46:57.099963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:46:57.116792 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:46:57.192170 systemd[1]: Started kubelet.service. May 15 10:46:57.193744 systemd[1]: Stopping kubelet.service... 
May 15 10:46:57.194006 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:46:57.194198 systemd[1]: Stopped kubelet.service. May 15 10:46:57.195663 systemd[1]: Starting kubelet.service... May 15 10:46:57.276629 systemd[1]: Started kubelet.service. May 15 10:46:57.318315 kubelet[1565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:46:57.318315 kubelet[1565]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 10:46:57.318315 kubelet[1565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 10:46:57.318762 kubelet[1565]: I0515 10:46:57.318383 1565 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:46:57.831562 kubelet[1565]: I0515 10:46:57.831510 1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 10:46:57.831562 kubelet[1565]: I0515 10:46:57.831554 1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:46:57.831855 kubelet[1565]: I0515 10:46:57.831833 1565 server.go:954] "Client rotation is on, will bootstrap in background" May 15 10:46:57.851713 kubelet[1565]: E0515 10:46:57.851673 1565 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError" May 15 10:46:57.852647 kubelet[1565]: I0515 10:46:57.852627 1565 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:46:57.859636 kubelet[1565]: E0515 10:46:57.859607 1565 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 10:46:57.859742 kubelet[1565]: I0515 10:46:57.859704 1565 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 10:46:57.863437 kubelet[1565]: I0515 10:46:57.863408 1565 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 10:46:57.863677 kubelet[1565]: I0515 10:46:57.863643 1565 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:46:57.863822 kubelet[1565]: I0515 10:46:57.863670 1565 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 10:46:57.863822 kubelet[1565]: I0515 10:46:57.863822 1565 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 10:46:57.863929 kubelet[1565]: I0515 10:46:57.863830 1565 container_manager_linux.go:304] "Creating device plugin manager" May 15 10:46:57.864373 kubelet[1565]: I0515 10:46:57.864352 1565 state_mem.go:36] "Initialized new in-memory state store" May 15 10:46:57.866895 kubelet[1565]: I0515 10:46:57.866875 1565 kubelet.go:446] "Attempting to sync node with API server" May 15 10:46:57.866895 kubelet[1565]: I0515 10:46:57.866892 1565 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:46:57.866950 kubelet[1565]: I0515 10:46:57.866909 1565 kubelet.go:352] "Adding apiserver pod source" May 15 10:46:57.866950 kubelet[1565]: I0515 10:46:57.866919 1565 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:46:57.886384 kubelet[1565]: I0515 10:46:57.886359 1565 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:46:57.886687 kubelet[1565]: I0515 10:46:57.886668 1565 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:46:57.886727 kubelet[1565]: W0515 10:46:57.886708 1565 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 10:46:57.899069 kubelet[1565]: I0515 10:46:57.899042 1565 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 10:46:57.899133 kubelet[1565]: I0515 10:46:57.899075 1565 server.go:1287] "Started kubelet"
May 15 10:46:57.908811 kubelet[1565]: W0515 10:46:57.908766 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:57.908862 kubelet[1565]: E0515 10:46:57.908812 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:57.908862 kubelet[1565]: I0515 10:46:57.908844 1565 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 10:46:57.909651 kubelet[1565]: I0515 10:46:57.909622 1565 server.go:490] "Adding debug handlers to kubelet server"
May 15 10:46:57.911658 kubelet[1565]: I0515 10:46:57.911608 1565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 10:46:57.911819 kubelet[1565]: I0515 10:46:57.911794 1565 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 10:46:57.912583 kubelet[1565]: W0515 10:46:57.912548 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:57.912635 kubelet[1565]: E0515 10:46:57.912591 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:57.912823 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 15 10:46:57.912949 kubelet[1565]: I0515 10:46:57.912916 1565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 10:46:57.913975 kubelet[1565]: I0515 10:46:57.913124 1565 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 10:46:57.913975 kubelet[1565]: E0515 10:46:57.913193 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:46:57.913975 kubelet[1565]: I0515 10:46:57.913211 1565 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 10:46:57.913975 kubelet[1565]: I0515 10:46:57.913329 1565 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 10:46:57.913975 kubelet[1565]: I0515 10:46:57.913361 1565 reconciler.go:26] "Reconciler: start to sync state"
May 15 10:46:57.913975 kubelet[1565]: W0515 10:46:57.913568 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:57.913975 kubelet[1565]: E0515 10:46:57.913595 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:57.915737 kubelet[1565]: I0515 10:46:57.915714 1565 factory.go:221] Registration of the systemd container factory successfully
May 15 10:46:57.915810 kubelet[1565]: I0515 10:46:57.915787 1565 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 10:46:57.915855 kubelet[1565]: E0515 10:46:57.911943 1565 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.121:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.121:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fad90cec68b7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 10:46:57.899055998 +0000 UTC m=+0.619019595,LastTimestamp:2025-05-15 10:46:57.899055998 +0000 UTC m=+0.619019595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 10:46:57.915987 kubelet[1565]: E0515 10:46:57.915913 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="200ms"
May 15 10:46:57.916987 kubelet[1565]: I0515 10:46:57.916968 1565 factory.go:221] Registration of the containerd container factory successfully
May 15 10:46:57.917049 kubelet[1565]: E0515 10:46:57.917011 1565 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 10:46:57.929149 kubelet[1565]: I0515 10:46:57.929110 1565 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 10:46:57.929149 kubelet[1565]: I0515 10:46:57.929142 1565 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 10:46:57.929241 kubelet[1565]: I0515 10:46:57.929156 1565 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:46:58.013519 kubelet[1565]: E0515 10:46:58.013458 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:46:58.113893 kubelet[1565]: E0515 10:46:58.113799 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:46:58.117272 kubelet[1565]: E0515 10:46:58.117244 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="400ms"
May 15 10:46:58.186228 kubelet[1565]: I0515 10:46:58.186199 1565 policy_none.go:49] "None policy: Start"
May 15 10:46:58.186228 kubelet[1565]: I0515 10:46:58.186219 1565 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 10:46:58.186323 kubelet[1565]: I0515 10:46:58.186235 1565 state_mem.go:35] "Initializing new in-memory state store"
May 15 10:46:58.190380 kubelet[1565]: I0515 10:46:58.190334 1565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 10:46:58.191678 kubelet[1565]: I0515 10:46:58.191656 1565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 10:46:58.191740 kubelet[1565]: I0515 10:46:58.191692 1565 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 10:46:58.191740 kubelet[1565]: I0515 10:46:58.191719 1565 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 10:46:58.191740 kubelet[1565]: I0515 10:46:58.191728 1565 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 10:46:58.191832 kubelet[1565]: E0515 10:46:58.191792 1565 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 10:46:58.192393 kubelet[1565]: W0515 10:46:58.192343 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:58.192453 kubelet[1565]: E0515 10:46:58.192402 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:58.193128 systemd[1]: Created slice kubepods.slice.
May 15 10:46:58.197576 systemd[1]: Created slice kubepods-burstable.slice.
May 15 10:46:58.199991 systemd[1]: Created slice kubepods-besteffort.slice.
May 15 10:46:58.206174 kubelet[1565]: I0515 10:46:58.206135 1565 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 10:46:58.206307 kubelet[1565]: I0515 10:46:58.206290 1565 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 10:46:58.206362 kubelet[1565]: I0515 10:46:58.206314 1565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 10:46:58.206764 kubelet[1565]: I0515 10:46:58.206557 1565 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 10:46:58.207100 kubelet[1565]: E0515 10:46:58.207085 1565 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 10:46:58.207167 kubelet[1565]: E0515 10:46:58.207126 1565 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 10:46:58.298921 systemd[1]: Created slice kubepods-burstable-pod0c324094b8a020c96c5f6c5061a5b8f7.slice.
May 15 10:46:58.304127 kubelet[1565]: E0515 10:46:58.304088 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:46:58.305593 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 15 10:46:58.308236 kubelet[1565]: I0515 10:46:58.308210 1565 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 10:46:58.308586 kubelet[1565]: E0515 10:46:58.308552 1565 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost"
May 15 10:46:58.311274 kubelet[1565]: E0515 10:46:58.311247 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:46:58.313199 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 15 10:46:58.314482 kubelet[1565]: I0515 10:46:58.314455 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:46:58.314482 kubelet[1565]: I0515 10:46:58.314481 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:46:58.314613 kubelet[1565]: I0515 10:46:58.314497 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:46:58.314613 kubelet[1565]: I0515 10:46:58.314509 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:46:58.314613 kubelet[1565]: I0515 10:46:58.314536 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 10:46:58.314613 kubelet[1565]: E0515 10:46:58.314523 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:46:58.314613 kubelet[1565]: I0515 10:46:58.314549 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:46:58.314613 kubelet[1565]: I0515 10:46:58.314564 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:46:58.314782 kubelet[1565]: I0515 10:46:58.314576 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:46:58.314782 kubelet[1565]: I0515 10:46:58.314608 1565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:46:58.510699 kubelet[1565]: I0515 10:46:58.510591 1565 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 10:46:58.510984 kubelet[1565]: E0515 10:46:58.510863 1565 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost"
May 15 10:46:58.518647 kubelet[1565]: E0515 10:46:58.518607 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="800ms"
May 15 10:46:58.605002 kubelet[1565]: E0515 10:46:58.604958 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:46:58.605660 env[1214]: time="2025-05-15T10:46:58.605624278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c324094b8a020c96c5f6c5061a5b8f7,Namespace:kube-system,Attempt:0,}"
May 15 10:46:58.611842 kubelet[1565]: E0515 10:46:58.611798 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:46:58.612171 env[1214]: time="2025-05-15T10:46:58.612127276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 15 10:46:58.615548 kubelet[1565]: E0515 10:46:58.615495 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:46:58.615939 env[1214]: time="2025-05-15T10:46:58.615903349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 15 10:46:58.901560 kubelet[1565]: W0515 10:46:58.901397 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:58.901560 kubelet[1565]: E0515 10:46:58.901471 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:58.912789 kubelet[1565]: I0515 10:46:58.912761 1565 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 10:46:58.913113 kubelet[1565]: E0515 10:46:58.913086 1565 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.121:6443/api/v1/nodes\": dial tcp 10.0.0.121:6443: connect: connection refused" node="localhost"
May 15 10:46:59.125179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664689599.mount: Deactivated successfully.
May 15 10:46:59.319345 kubelet[1565]: E0515 10:46:59.319205 1565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.121:6443: connect: connection refused" interval="1.6s"
May 15 10:46:59.332751 kubelet[1565]: W0515 10:46:59.332686 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:59.332830 kubelet[1565]: E0515 10:46:59.332749 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:59.349360 kubelet[1565]: W0515 10:46:59.349314 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:59.349360 kubelet[1565]: E0515 10:46:59.349354 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:59.357788 env[1214]: time="2025-05-15T10:46:59.357744400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.358956 env[1214]: time="2025-05-15T10:46:59.358926818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.361569 env[1214]: time="2025-05-15T10:46:59.361543126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.362429 env[1214]: time="2025-05-15T10:46:59.362389042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.363935 env[1214]: time="2025-05-15T10:46:59.363893084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.365389 env[1214]: time="2025-05-15T10:46:59.365339667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.366568 env[1214]: time="2025-05-15T10:46:59.366544747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.368255 env[1214]: time="2025-05-15T10:46:59.368224077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.369187 env[1214]: time="2025-05-15T10:46:59.369146798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.371491 env[1214]: time="2025-05-15T10:46:59.371446502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.372572 env[1214]: time="2025-05-15T10:46:59.372541526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.373708 env[1214]: time="2025-05-15T10:46:59.373686754Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 10:46:59.393310 env[1214]: time="2025-05-15T10:46:59.393256000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:46:59.393310 env[1214]: time="2025-05-15T10:46:59.393294903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:46:59.393310 env[1214]: time="2025-05-15T10:46:59.393304812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:46:59.393490 env[1214]: time="2025-05-15T10:46:59.393435427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f07db49081337153658f018644b8d27c6d286c846c2471616b00f2d3443a2c9 pid=1606 runtime=io.containerd.runc.v2
May 15 10:46:59.405347 systemd[1]: Started cri-containerd-0f07db49081337153658f018644b8d27c6d286c846c2471616b00f2d3443a2c9.scope.
May 15 10:46:59.407671 kubelet[1565]: W0515 10:46:59.407612 1565 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.121:6443: connect: connection refused
May 15 10:46:59.407671 kubelet[1565]: E0515 10:46:59.407645 1565 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.121:6443: connect: connection refused" logger="UnhandledError"
May 15 10:46:59.411932 env[1214]: time="2025-05-15T10:46:59.411878260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:46:59.412026 env[1214]: time="2025-05-15T10:46:59.411924417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:46:59.412026 env[1214]: time="2025-05-15T10:46:59.411954333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:46:59.412183 env[1214]: time="2025-05-15T10:46:59.412154107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79e6aca6223875fe487ff877fd1ae7f82370defa3daa9a4681cdd027172d861a pid=1651 runtime=io.containerd.runc.v2
May 15 10:46:59.413911 env[1214]: time="2025-05-15T10:46:59.413858114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:46:59.413911 env[1214]: time="2025-05-15T10:46:59.413895013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:46:59.414058 env[1214]: time="2025-05-15T10:46:59.413905242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:46:59.414058 env[1214]: time="2025-05-15T10:46:59.414033292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/445572f58a5e6b42d2273d384c594426b043c8d640b595667be5c94bb11cecd8 pid=1640 runtime=io.containerd.runc.v2
May 15 10:46:59.422893 systemd[1]: Started cri-containerd-79e6aca6223875fe487ff877fd1ae7f82370defa3daa9a4681cdd027172d861a.scope.
May 15 10:46:59.426958 systemd[1]: Started cri-containerd-445572f58a5e6b42d2273d384c594426b043c8d640b595667be5c94bb11cecd8.scope.
May 15 10:46:59.441465 env[1214]: time="2025-05-15T10:46:59.441417627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f07db49081337153658f018644b8d27c6d286c846c2471616b00f2d3443a2c9\""
May 15 10:46:59.442398 kubelet[1565]: E0515 10:46:59.442370 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:46:59.444387 env[1214]: time="2025-05-15T10:46:59.444347774Z" level=info msg="CreateContainer within sandbox \"0f07db49081337153658f018644b8d27c6d286c846c2471616b00f2d3443a2c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 10:46:59.457519 env[1214]: time="2025-05-15T10:46:59.457478323Z" level=info msg="CreateContainer within sandbox \"0f07db49081337153658f018644b8d27c6d286c846c2471616b00f2d3443a2c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8af654ba78702465b8043b1d437885fe6dc7099cc1dd7c837a4c599548bcf97\""
May 15 10:46:59.458987 env[1214]: time="2025-05-15T10:46:59.458954842Z" level=info msg="StartContainer for \"c8af654ba78702465b8043b1d437885fe6dc7099cc1dd7c837a4c599548bcf97\""
May 15 10:46:59.459293 env[1214]: time="2025-05-15T10:46:59.459260015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"79e6aca6223875fe487ff877fd1ae7f82370defa3daa9a4681cdd027172d861a\""
May 15 10:46:59.459908 kubelet[1565]: E0515 10:46:59.459886 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:46:59.461804 env[1214]: time="2025-05-15T10:46:59.461774030Z" level=info msg="CreateContainer within sandbox \"79e6aca6223875fe487ff877fd1ae7f82370defa3daa9a4681cdd027172d861a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 10:46:59.467139 env[1214]: time="2025-05-15T10:46:59.467091895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0c324094b8a020c96c5f6c5061a5b8f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"445572f58a5e6b42d2273d384c594426b043c8d640b595667be5c94bb11cecd8\""
May 15 10:46:59.467491 kubelet[1565]: E0515 10:46:59.467470 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:46:59.469155 env[1214]: time="2025-05-15T10:46:59.469129237Z" level=info msg="CreateContainer within sandbox \"445572f58a5e6b42d2273d384c594426b043c8d640b595667be5c94bb11cecd8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 10:46:59.476810 systemd[1]: Started cri-containerd-c8af654ba78702465b8043b1d437885fe6dc7099cc1dd7c837a4c599548bcf97.scope.
May 15 10:46:59.478725 env[1214]: time="2025-05-15T10:46:59.477625473Z" level=info msg="CreateContainer within sandbox \"79e6aca6223875fe487ff877fd1ae7f82370defa3daa9a4681cdd027172d861a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c319fe9cd921d93574aec1d63071f792b98be0a43cea196c27dbd21860a4b3e2\""
May 15 10:46:59.479380 env[1214]: time="2025-05-15T10:46:59.479346451Z" level=info msg="StartContainer for \"c319fe9cd921d93574aec1d63071f792b98be0a43cea196c27dbd21860a4b3e2\""
May 15 10:46:59.485835 env[1214]: time="2025-05-15T10:46:59.485787282Z" level=info msg="CreateContainer within sandbox \"445572f58a5e6b42d2273d384c594426b043c8d640b595667be5c94bb11cecd8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"696f9d71a9db73e84f2feb7237a619eea5e6cc5b57f5b0267c7b1b0f7de7051f\""
May 15 10:46:59.486302 env[1214]: time="2025-05-15T10:46:59.486267944Z" level=info msg="StartContainer for \"696f9d71a9db73e84f2feb7237a619eea5e6cc5b57f5b0267c7b1b0f7de7051f\""
May 15 10:46:59.495442 systemd[1]: Started cri-containerd-c319fe9cd921d93574aec1d63071f792b98be0a43cea196c27dbd21860a4b3e2.scope.
May 15 10:46:59.505343 systemd[1]: Started cri-containerd-696f9d71a9db73e84f2feb7237a619eea5e6cc5b57f5b0267c7b1b0f7de7051f.scope.
May 15 10:46:59.521486 env[1214]: time="2025-05-15T10:46:59.521416042Z" level=info msg="StartContainer for \"c8af654ba78702465b8043b1d437885fe6dc7099cc1dd7c837a4c599548bcf97\" returns successfully"
May 15 10:46:59.539789 env[1214]: time="2025-05-15T10:46:59.539754079Z" level=info msg="StartContainer for \"c319fe9cd921d93574aec1d63071f792b98be0a43cea196c27dbd21860a4b3e2\" returns successfully"
May 15 10:46:59.550332 env[1214]: time="2025-05-15T10:46:59.550280283Z" level=info msg="StartContainer for \"696f9d71a9db73e84f2feb7237a619eea5e6cc5b57f5b0267c7b1b0f7de7051f\" returns successfully"
May 15 10:46:59.715222 kubelet[1565]: I0515 10:46:59.715113 1565 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 10:47:00.197035 kubelet[1565]: E0515 10:47:00.196907 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:47:00.197035 kubelet[1565]: E0515 10:47:00.197017 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:00.198252 kubelet[1565]: E0515 10:47:00.198219 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:47:00.198304 kubelet[1565]: E0515 10:47:00.198289 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:00.199414 kubelet[1565]: E0515 10:47:00.199393 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:47:00.199496 kubelet[1565]: E0515 10:47:00.199466 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:00.667868 kubelet[1565]: I0515 10:47:00.667751 1565 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 10:47:00.667868 kubelet[1565]: E0515 10:47:00.667788 1565 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 15 10:47:00.670438 kubelet[1565]: E0515 10:47:00.670398 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:00.771340 kubelet[1565]: E0515 10:47:00.771309 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:00.871478 kubelet[1565]: E0515 10:47:00.871434 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:00.972302 kubelet[1565]: E0515 10:47:00.972179 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.073120 kubelet[1565]: E0515 10:47:01.073077 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.173874 kubelet[1565]: E0515 10:47:01.173816 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.201014 kubelet[1565]: E0515 10:47:01.200973 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:47:01.201115 kubelet[1565]: E0515 10:47:01.201035 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:47:01.201115 kubelet[1565]: E0515 10:47:01.201091 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:01.201165 kubelet[1565]: E0515 10:47:01.201153 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:01.201273 kubelet[1565]: E0515 10:47:01.201258 1565 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 10:47:01.201393 kubelet[1565]: E0515 10:47:01.201327 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:01.275042 kubelet[1565]: E0515 10:47:01.274932 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.375888 kubelet[1565]: E0515 10:47:01.375826 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.476739 kubelet[1565]: E0515 10:47:01.476692 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.577344 kubelet[1565]: E0515 10:47:01.577228 1565 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:01.614776 kubelet[1565]: I0515 10:47:01.614718 1565 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:01.621202 kubelet[1565]: I0515 10:47:01.621174 1565 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:01.625206 kubelet[1565]: I0515 10:47:01.625150 1565 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 10:47:01.869500 kubelet[1565]: I0515 10:47:01.869384 1565 apiserver.go:52] "Watching apiserver"
May 15 10:47:01.914366 kubelet[1565]: I0515 10:47:01.914325 1565 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 10:47:02.201298 kubelet[1565]: I0515 10:47:02.201210 1565 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:02.201298 kubelet[1565]: E0515 10:47:02.201244 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:02.201658 kubelet[1565]: E0515 10:47:02.201542 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:02.206385 kubelet[1565]: E0515 10:47:02.206358 1565 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:02.206484 kubelet[1565]: E0515 10:47:02.206470 1565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:02.722431 systemd[1]: Reloading.
May 15 10:47:02.781365 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-05-15T10:47:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]"
May 15 10:47:02.781698 /usr/lib/systemd/system-generators/torcx-generator[1865]: time="2025-05-15T10:47:02Z" level=info msg="torcx already run"
May 15 10:47:02.842132 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 10:47:02.842148 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 10:47:02.858969 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 10:47:02.944955 kubelet[1565]: I0515 10:47:02.944806 1565 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 10:47:02.944871 systemd[1]: Stopping kubelet.service...
May 15 10:47:02.966946 systemd[1]: kubelet.service: Deactivated successfully.
May 15 10:47:02.967131 systemd[1]: Stopped kubelet.service.
May 15 10:47:02.968731 systemd[1]: Starting kubelet.service...
May 15 10:47:03.057310 systemd[1]: Started kubelet.service.
May 15 10:47:03.089842 kubelet[1909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:47:03.089842 kubelet[1909]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 10:47:03.089842 kubelet[1909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 10:47:03.090199 kubelet[1909]: I0515 10:47:03.089942 1909 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 10:47:03.096551 kubelet[1909]: I0515 10:47:03.096498 1909 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 10:47:03.096551 kubelet[1909]: I0515 10:47:03.096538 1909 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 10:47:03.096814 kubelet[1909]: I0515 10:47:03.096792 1909 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 10:47:03.097910 kubelet[1909]: I0515 10:47:03.097886 1909 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 10:47:03.099838 kubelet[1909]: I0515 10:47:03.099794 1909 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 10:47:03.102834 kubelet[1909]: E0515 10:47:03.102801 1909 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 10:47:03.102834 kubelet[1909]: I0515 10:47:03.102831 1909 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 10:47:03.108269 kubelet[1909]: I0515 10:47:03.108229 1909 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 10:47:03.108418 kubelet[1909]: I0515 10:47:03.108390 1909 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 10:47:03.108607 kubelet[1909]: I0515 10:47:03.108419 1909 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 10:47:03.108700 kubelet[1909]: I0515 10:47:03.108614 1909 topology_manager.go:138] "Creating topology manager with none policy"
May 15 10:47:03.108700 kubelet[1909]: I0515 10:47:03.108623 1909 container_manager_linux.go:304] "Creating device plugin manager"
May 15 10:47:03.108700 kubelet[1909]: I0515 10:47:03.108658 1909 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:47:03.108779 kubelet[1909]: I0515 10:47:03.108771 1909 kubelet.go:446] "Attempting to sync node with API server"
May 15 10:47:03.108808 kubelet[1909]: I0515 10:47:03.108784 1909 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 10:47:03.108808 kubelet[1909]: I0515 10:47:03.108799 1909 kubelet.go:352] "Adding apiserver pod source"
May 15 10:47:03.108854 kubelet[1909]: I0515 10:47:03.108809 1909 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 10:47:03.109695 kubelet[1909]: I0515 10:47:03.109665 1909 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.110307 1909 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.110836 1909 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.110859 1909 server.go:1287] "Started kubelet"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.112185 1909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.113596 1909 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.113665 1909 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.113747 1909 reconciler.go:26] "Reconciler: start to sync state"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.115201 1909 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.116183 1909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.116440 1909 server.go:490] "Adding debug handlers to kubelet server"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.116448 1909 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.117327 1909 factory.go:221] Registration of the systemd container factory successfully
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.117463 1909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.117699 1909 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 10:47:03.121570 kubelet[1909]: E0515 10:47:03.118764 1909 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 10:47:03.121570 kubelet[1909]: I0515 10:47:03.118955 1909 factory.go:221] Registration of the containerd container factory successfully
May 15 10:47:03.121570 kubelet[1909]: E0515 10:47:03.118989 1909 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 10:47:03.137367 kubelet[1909]: I0515 10:47:03.137298 1909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 10:47:03.139392 kubelet[1909]: I0515 10:47:03.139370 1909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 10:47:03.139473 kubelet[1909]: I0515 10:47:03.139398 1909 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 10:47:03.139473 kubelet[1909]: I0515 10:47:03.139420 1909 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 10:47:03.139473 kubelet[1909]: I0515 10:47:03.139426 1909 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 10:47:03.139473 kubelet[1909]: E0515 10:47:03.139467 1909 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 10:47:03.154192 kubelet[1909]: I0515 10:47:03.154171 1909 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 10:47:03.154192 kubelet[1909]: I0515 10:47:03.154185 1909 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 10:47:03.154192 kubelet[1909]: I0515 10:47:03.154200 1909 state_mem.go:36] "Initialized new in-memory state store"
May 15 10:47:03.154408 kubelet[1909]: I0515 10:47:03.154306 1909 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 10:47:03.154408 kubelet[1909]: I0515 10:47:03.154314 1909 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 10:47:03.154408 kubelet[1909]: I0515 10:47:03.154329 1909 policy_none.go:49] "None policy: Start"
May 15 10:47:03.154408 kubelet[1909]: I0515 10:47:03.154336 1909 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 10:47:03.154408 kubelet[1909]: I0515 10:47:03.154343 1909 state_mem.go:35] "Initializing new in-memory state store"
May 15 10:47:03.154557 kubelet[1909]: I0515 10:47:03.154427 1909 state_mem.go:75] "Updated machine memory state"
May 15 10:47:03.157576 kubelet[1909]: I0515 10:47:03.157548 1909 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 10:47:03.157734 kubelet[1909]: I0515 10:47:03.157710 1909 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 10:47:03.157783 kubelet[1909]: I0515 10:47:03.157728 1909 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 10:47:03.158153 kubelet[1909]: I0515 10:47:03.157946 1909 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 10:47:03.158661 kubelet[1909]: E0515 10:47:03.158641 1909 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 10:47:03.240882 kubelet[1909]: I0515 10:47:03.240851 1909 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 10:47:03.240882 kubelet[1909]: I0515 10:47:03.240874 1909 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.241072 kubelet[1909]: I0515 10:47:03.241015 1909 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:03.246660 kubelet[1909]: E0515 10:47:03.246618 1909 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:03.246990 kubelet[1909]: E0515 10:47:03.246973 1909 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 10:47:03.247127 kubelet[1909]: E0515 10:47:03.247067 1909 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.260760 kubelet[1909]: I0515 10:47:03.260733 1909 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 10:47:03.267615 kubelet[1909]: I0515 10:47:03.267591 1909 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 15 10:47:03.267717 kubelet[1909]: I0515 10:47:03.267669 1909 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 10:47:03.416183 kubelet[1909]: I0515 10:47:03.415382 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.416183 kubelet[1909]: I0515 10:47:03.415422 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 10:47:03.416183 kubelet[1909]: I0515 10:47:03.415450 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:47:03.416183 kubelet[1909]: I0515 10:47:03.415469 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.416183 kubelet[1909]: I0515 10:47:03.415484 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.416448 kubelet[1909]: I0515 10:47:03.415499 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.416448 kubelet[1909]: I0515 10:47:03.415543 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 10:47:03.416448 kubelet[1909]: I0515 10:47:03.415571 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:47:03.416448 kubelet[1909]: I0515 10:47:03.415587 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c324094b8a020c96c5f6c5061a5b8f7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0c324094b8a020c96c5f6c5061a5b8f7\") " pod="kube-system/kube-apiserver-localhost"
May 15 10:47:03.547241 kubelet[1909]: E0515 10:47:03.547199 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:03.547241 kubelet[1909]: E0515 10:47:03.547251 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:03.547437 kubelet[1909]: E0515 10:47:03.547202 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:03.720980 sudo[1944]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 15 10:47:03.721193 sudo[1944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 15 10:47:04.109441 kubelet[1909]: I0515 10:47:04.109346 1909 apiserver.go:52] "Watching apiserver"
May 15 10:47:04.114606 kubelet[1909]: I0515 10:47:04.114586 1909 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 10:47:04.145133 kubelet[1909]: E0515 10:47:04.145111 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:04.145283 kubelet[1909]: E0515 10:47:04.145256 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:04.145352 kubelet[1909]: I0515 10:47:04.145161 1909 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:04.150315 kubelet[1909]: E0515 10:47:04.150281 1909 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 10:47:04.150460 kubelet[1909]: E0515 10:47:04.150427 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:04.169812 kubelet[1909]: I0515 10:47:04.169763 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.169749907 podStartE2EDuration="3.169749907s" podCreationTimestamp="2025-05-15 10:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:47:04.169648547 +0000 UTC m=+1.108372651" watchObservedRunningTime="2025-05-15 10:47:04.169749907 +0000 UTC m=+1.108474011"
May 15 10:47:04.169951 kubelet[1909]: I0515 10:47:04.169849 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.169843553 podStartE2EDuration="3.169843553s" podCreationTimestamp="2025-05-15 10:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:47:04.163794837 +0000 UTC m=+1.102518941" watchObservedRunningTime="2025-05-15 10:47:04.169843553 +0000 UTC m=+1.108567657"
May 15 10:47:04.172958 sudo[1944]: pam_unix(sudo:session): session closed for user root
May 15 10:47:04.175438 kubelet[1909]: I0515 10:47:04.175067 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.175048215 podStartE2EDuration="3.175048215s" podCreationTimestamp="2025-05-15 10:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:47:04.174969548 +0000 UTC m=+1.113693652" watchObservedRunningTime="2025-05-15 10:47:04.175048215 +0000 UTC m=+1.113772319"
May 15 10:47:05.146384 kubelet[1909]: E0515 10:47:05.146356 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:05.146849 kubelet[1909]: E0515 10:47:05.146827 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:05.355236 sudo[1306]: pam_unix(sudo:session): session closed for user root
May 15 10:47:05.356303 sshd[1303]: pam_unix(sshd:session): session closed for user core
May 15 10:47:05.358345 systemd[1]: sshd@4-10.0.0.121:22-10.0.0.1:44630.service: Deactivated successfully.
May 15 10:47:05.359000 systemd[1]: session-5.scope: Deactivated successfully.
May 15 10:47:05.359125 systemd[1]: session-5.scope: Consumed 3.543s CPU time.
May 15 10:47:05.359552 systemd-logind[1196]: Session 5 logged out. Waiting for processes to exit.
May 15 10:47:05.360184 systemd-logind[1196]: Removed session 5.
May 15 10:47:06.148002 kubelet[1909]: E0515 10:47:06.147975 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:06.148417 kubelet[1909]: E0515 10:47:06.148162 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:08.505463 systemd[1]: Created slice kubepods-besteffort-pod0e994a8c_1d95_4124_961a_a4972d4d3fbe.slice.
May 15 10:47:08.516656 systemd[1]: Created slice kubepods-burstable-podc07df115_bc56_4b47_bbdc_6d7a25dce805.slice.
May 15 10:47:08.548929 kubelet[1909]: I0515 10:47:08.548881 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e994a8c-1d95-4124-961a-a4972d4d3fbe-xtables-lock\") pod \"kube-proxy-hnhwv\" (UID: \"0e994a8c-1d95-4124-961a-a4972d4d3fbe\") " pod="kube-system/kube-proxy-hnhwv"
May 15 10:47:08.548929 kubelet[1909]: I0515 10:47:08.548916 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cni-path\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.548929 kubelet[1909]: I0515 10:47:08.548932 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-xtables-lock\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549281 kubelet[1909]: I0515 10:47:08.548947 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c07df115-bc56-4b47-bbdc-6d7a25dce805-clustermesh-secrets\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549281 kubelet[1909]: I0515 10:47:08.548961 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e994a8c-1d95-4124-961a-a4972d4d3fbe-lib-modules\") pod \"kube-proxy-hnhwv\" (UID: \"0e994a8c-1d95-4124-961a-a4972d4d3fbe\") " pod="kube-system/kube-proxy-hnhwv"
May 15 10:47:08.549281 kubelet[1909]: I0515 10:47:08.548993 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-hubble-tls\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549281 kubelet[1909]: I0515 10:47:08.549009 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-hostproc\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549281 kubelet[1909]: I0515 10:47:08.549021 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-net\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549281 kubelet[1909]: I0515 10:47:08.549081 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-kernel\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549429 kubelet[1909]: I0515 10:47:08.549117 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbs6j\" (UniqueName: \"kubernetes.io/projected/0e994a8c-1d95-4124-961a-a4972d4d3fbe-kube-api-access-rbs6j\") pod \"kube-proxy-hnhwv\" (UID: \"0e994a8c-1d95-4124-961a-a4972d4d3fbe\") " pod="kube-system/kube-proxy-hnhwv"
May 15 10:47:08.549429 kubelet[1909]: I0515 10:47:08.549137 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-lib-modules\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549429 kubelet[1909]: I0515 10:47:08.549164 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e994a8c-1d95-4124-961a-a4972d4d3fbe-kube-proxy\") pod \"kube-proxy-hnhwv\" (UID: \"0e994a8c-1d95-4124-961a-a4972d4d3fbe\") " pod="kube-system/kube-proxy-hnhwv"
May 15 10:47:08.549429 kubelet[1909]: I0515 10:47:08.549192 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-run\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549429 kubelet[1909]: I0515 10:47:08.549211 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-cgroup\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549429 kubelet[1909]: I0515 10:47:08.549223 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-etc-cni-netd\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549588 kubelet[1909]: I0515 10:47:08.549236 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-config-path\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549588 kubelet[1909]: I0515 10:47:08.549248 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-bpf-maps\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.549588 kubelet[1909]: I0515 10:47:08.549262 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrkh\" (UniqueName: \"kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-kube-api-access-zrrkh\") pod \"cilium-xklc9\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") " pod="kube-system/cilium-xklc9"
May 15 10:47:08.577499 kubelet[1909]: I0515 10:47:08.577476 1909 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 10:47:08.577859 env[1214]: time="2025-05-15T10:47:08.577808523Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 10:47:08.578067 kubelet[1909]: I0515 10:47:08.578019 1909 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 10:47:08.650275 kubelet[1909]: I0515 10:47:08.650233 1909 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 15 10:47:08.656383 kubelet[1909]: E0515 10:47:08.656356 1909 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 10:47:08.656383 kubelet[1909]: E0515 10:47:08.656380 1909 projected.go:194] Error preparing data for projected volume kube-api-access-zrrkh for pod kube-system/cilium-xklc9: configmap "kube-root-ca.crt" not found
May 15 10:47:08.656494 kubelet[1909]: E0515 10:47:08.656419 1909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-kube-api-access-zrrkh podName:c07df115-bc56-4b47-bbdc-6d7a25dce805 nodeName:}" failed. No retries permitted until 2025-05-15 10:47:09.156405041 +0000 UTC m=+6.095129145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zrrkh" (UniqueName: "kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-kube-api-access-zrrkh") pod "cilium-xklc9" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805") : configmap "kube-root-ca.crt" not found
May 15 10:47:08.658557 kubelet[1909]: E0515 10:47:08.657672 1909 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 15 10:47:08.658557 kubelet[1909]: E0515 10:47:08.657704 1909 projected.go:194] Error preparing data for projected volume kube-api-access-rbs6j for pod kube-system/kube-proxy-hnhwv: configmap "kube-root-ca.crt" not found
May 15 10:47:08.658557 kubelet[1909]: E0515 10:47:08.657758 1909 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e994a8c-1d95-4124-961a-a4972d4d3fbe-kube-api-access-rbs6j podName:0e994a8c-1d95-4124-961a-a4972d4d3fbe nodeName:}" failed. No retries permitted until 2025-05-15 10:47:09.157744266 +0000 UTC m=+6.096468370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rbs6j" (UniqueName: "kubernetes.io/projected/0e994a8c-1d95-4124-961a-a4972d4d3fbe-kube-api-access-rbs6j") pod "kube-proxy-hnhwv" (UID: "0e994a8c-1d95-4124-961a-a4972d4d3fbe") : configmap "kube-root-ca.crt" not found
May 15 10:47:09.137087 kubelet[1909]: E0515 10:47:09.137051 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:09.153426 kubelet[1909]: E0515 10:47:09.153394 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:09.415145 kubelet[1909]: E0515 10:47:09.415021 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:09.415643 env[1214]: time="2025-05-15T10:47:09.415602475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hnhwv,Uid:0e994a8c-1d95-4124-961a-a4972d4d3fbe,Namespace:kube-system,Attempt:0,}"
May 15 10:47:09.418495 kubelet[1909]: E0515 10:47:09.418469 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:09.418878 env[1214]: time="2025-05-15T10:47:09.418818289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xklc9,Uid:c07df115-bc56-4b47-bbdc-6d7a25dce805,Namespace:kube-system,Attempt:0,}"
May 15 10:47:09.689726 systemd[1]: Created slice kubepods-besteffort-pod04185e21_f162_45a9_ad40_10da46df426d.slice.
May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701807613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701827622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701845275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701923843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508 pid=2009 runtime=io.containerd.runc.v2 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701474594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701504680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701513768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:47:09.707027 env[1214]: time="2025-05-15T10:47:09.701639947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/418f9df334aab6d1a9b956cfed740e057dca5c4fc577d669c495e8faa5d94bd0 pid=2005 runtime=io.containerd.runc.v2 May 15 10:47:09.720429 systemd[1]: Started cri-containerd-418f9df334aab6d1a9b956cfed740e057dca5c4fc577d669c495e8faa5d94bd0.scope. May 15 10:47:09.725977 systemd[1]: Started cri-containerd-6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508.scope. 
May 15 10:47:09.746499 env[1214]: time="2025-05-15T10:47:09.745318300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hnhwv,Uid:0e994a8c-1d95-4124-961a-a4972d4d3fbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"418f9df334aab6d1a9b956cfed740e057dca5c4fc577d669c495e8faa5d94bd0\"" May 15 10:47:09.746678 kubelet[1909]: E0515 10:47:09.745930 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:09.747174 env[1214]: time="2025-05-15T10:47:09.747089140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xklc9,Uid:c07df115-bc56-4b47-bbdc-6d7a25dce805,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\"" May 15 10:47:09.749927 kubelet[1909]: E0515 10:47:09.749578 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:09.750068 env[1214]: time="2025-05-15T10:47:09.749595782Z" level=info msg="CreateContainer within sandbox \"418f9df334aab6d1a9b956cfed740e057dca5c4fc577d669c495e8faa5d94bd0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:47:09.750561 env[1214]: time="2025-05-15T10:47:09.750541571Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 10:47:09.757309 kubelet[1909]: I0515 10:47:09.757220 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04185e21-f162-45a9-ad40-10da46df426d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wv4z7\" (UID: \"04185e21-f162-45a9-ad40-10da46df426d\") " pod="kube-system/cilium-operator-6c4d7847fc-wv4z7" May 15 10:47:09.757309 
kubelet[1909]: I0515 10:47:09.757259 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7g9\" (UniqueName: \"kubernetes.io/projected/04185e21-f162-45a9-ad40-10da46df426d-kube-api-access-kh7g9\") pod \"cilium-operator-6c4d7847fc-wv4z7\" (UID: \"04185e21-f162-45a9-ad40-10da46df426d\") " pod="kube-system/cilium-operator-6c4d7847fc-wv4z7" May 15 10:47:09.765910 env[1214]: time="2025-05-15T10:47:09.765862706Z" level=info msg="CreateContainer within sandbox \"418f9df334aab6d1a9b956cfed740e057dca5c4fc577d669c495e8faa5d94bd0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f75da0dbf345e35b1e51a9a9ed64b5bb3278e9a6d918c0ac0137a2871c7d3e78\"" May 15 10:47:09.766483 env[1214]: time="2025-05-15T10:47:09.766426383Z" level=info msg="StartContainer for \"f75da0dbf345e35b1e51a9a9ed64b5bb3278e9a6d918c0ac0137a2871c7d3e78\"" May 15 10:47:09.780198 systemd[1]: Started cri-containerd-f75da0dbf345e35b1e51a9a9ed64b5bb3278e9a6d918c0ac0137a2871c7d3e78.scope. May 15 10:47:09.805633 env[1214]: time="2025-05-15T10:47:09.805593963Z" level=info msg="StartContainer for \"f75da0dbf345e35b1e51a9a9ed64b5bb3278e9a6d918c0ac0137a2871c7d3e78\" returns successfully" May 15 10:47:10.004411 kubelet[1909]: E0515 10:47:10.004313 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:10.005117 env[1214]: time="2025-05-15T10:47:10.005070870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wv4z7,Uid:04185e21-f162-45a9-ad40-10da46df426d,Namespace:kube-system,Attempt:0,}" May 15 10:47:10.019550 env[1214]: time="2025-05-15T10:47:10.019434700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:47:10.019550 env[1214]: time="2025-05-15T10:47:10.019483282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:47:10.019550 env[1214]: time="2025-05-15T10:47:10.019493130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:47:10.019732 env[1214]: time="2025-05-15T10:47:10.019655968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c pid=2174 runtime=io.containerd.runc.v2 May 15 10:47:10.029581 systemd[1]: Started cri-containerd-1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c.scope. May 15 10:47:10.067024 env[1214]: time="2025-05-15T10:47:10.066976494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wv4z7,Uid:04185e21-f162-45a9-ad40-10da46df426d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c\"" May 15 10:47:10.067835 kubelet[1909]: E0515 10:47:10.067710 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:10.156361 kubelet[1909]: E0515 10:47:10.156109 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:10.695756 systemd[1]: run-containerd-runc-k8s.io-418f9df334aab6d1a9b956cfed740e057dca5c4fc577d669c495e8faa5d94bd0-runc.LHEaW7.mount: Deactivated successfully. 
May 15 10:47:14.259576 kubelet[1909]: E0515 10:47:14.257555 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:14.312764 kubelet[1909]: I0515 10:47:14.312705 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hnhwv" podStartSLOduration=6.312688335 podStartE2EDuration="6.312688335s" podCreationTimestamp="2025-05-15 10:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:47:10.165504185 +0000 UTC m=+7.104228289" watchObservedRunningTime="2025-05-15 10:47:14.312688335 +0000 UTC m=+11.251412439" May 15 10:47:14.552363 kubelet[1909]: E0515 10:47:14.552228 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:15.170827 kubelet[1909]: E0515 10:47:15.170766 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:18.551079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502204941.mount: Deactivated successfully. May 15 10:47:18.796129 update_engine[1198]: I0515 10:47:18.796082 1198 update_attempter.cc:509] Updating boot flags... 
May 15 10:47:22.070075 env[1214]: time="2025-05-15T10:47:22.070013754Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:47:22.071917 env[1214]: time="2025-05-15T10:47:22.071879677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:47:22.073591 env[1214]: time="2025-05-15T10:47:22.073561001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:47:22.074213 env[1214]: time="2025-05-15T10:47:22.074185307Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 10:47:22.075597 env[1214]: time="2025-05-15T10:47:22.075355941Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 10:47:22.076219 env[1214]: time="2025-05-15T10:47:22.076192897Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:47:22.088022 env[1214]: time="2025-05-15T10:47:22.087976902Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\"" May 15 
10:47:22.089409 env[1214]: time="2025-05-15T10:47:22.088419936Z" level=info msg="StartContainer for \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\"" May 15 10:47:22.102577 systemd[1]: Started cri-containerd-3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e.scope. May 15 10:47:22.132144 systemd[1]: cri-containerd-3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e.scope: Deactivated successfully. May 15 10:47:22.319390 env[1214]: time="2025-05-15T10:47:22.319187400Z" level=info msg="StartContainer for \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\" returns successfully" May 15 10:47:22.649664 env[1214]: time="2025-05-15T10:47:22.649610023Z" level=info msg="shim disconnected" id=3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e May 15 10:47:22.649664 env[1214]: time="2025-05-15T10:47:22.649664135Z" level=warning msg="cleaning up after shim disconnected" id=3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e namespace=k8s.io May 15 10:47:22.650008 env[1214]: time="2025-05-15T10:47:22.649686227Z" level=info msg="cleaning up dead shim" May 15 10:47:22.659464 env[1214]: time="2025-05-15T10:47:22.659405746Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:47:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2341 runtime=io.containerd.runc.v2\n" May 15 10:47:23.085627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e-rootfs.mount: Deactivated successfully. 
May 15 10:47:23.323824 kubelet[1909]: E0515 10:47:23.323791 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:23.326230 env[1214]: time="2025-05-15T10:47:23.325611692Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:47:23.340746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760287438.mount: Deactivated successfully. May 15 10:47:23.343977 env[1214]: time="2025-05-15T10:47:23.343931616Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\"" May 15 10:47:23.344537 env[1214]: time="2025-05-15T10:47:23.344480098Z" level=info msg="StartContainer for \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\"" May 15 10:47:23.360593 systemd[1]: Started cri-containerd-2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749.scope. May 15 10:47:23.399318 env[1214]: time="2025-05-15T10:47:23.398173132Z" level=info msg="StartContainer for \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\" returns successfully" May 15 10:47:23.404748 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:47:23.404939 systemd[1]: Stopped systemd-sysctl.service. May 15 10:47:23.405095 systemd[1]: Stopping systemd-sysctl.service... May 15 10:47:23.406408 systemd[1]: Starting systemd-sysctl.service... May 15 10:47:23.411908 systemd[1]: cri-containerd-2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749.scope: Deactivated successfully. May 15 10:47:23.453561 systemd[1]: Finished systemd-sysctl.service. 
May 15 10:47:23.477391 env[1214]: time="2025-05-15T10:47:23.477353760Z" level=info msg="shim disconnected" id=2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749 May 15 10:47:23.477503 env[1214]: time="2025-05-15T10:47:23.477393455Z" level=warning msg="cleaning up after shim disconnected" id=2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749 namespace=k8s.io May 15 10:47:23.477503 env[1214]: time="2025-05-15T10:47:23.477403594Z" level=info msg="cleaning up dead shim" May 15 10:47:23.484062 env[1214]: time="2025-05-15T10:47:23.484030187Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:47:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2406 runtime=io.containerd.runc.v2\n" May 15 10:47:24.085605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749-rootfs.mount: Deactivated successfully. May 15 10:47:24.214402 env[1214]: time="2025-05-15T10:47:24.214350007Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:47:24.216071 env[1214]: time="2025-05-15T10:47:24.216042040Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:47:24.217498 env[1214]: time="2025-05-15T10:47:24.217462574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:47:24.217963 env[1214]: time="2025-05-15T10:47:24.217931226Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 10:47:24.219917 env[1214]: time="2025-05-15T10:47:24.219885934Z" level=info msg="CreateContainer within sandbox \"1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 10:47:24.230773 env[1214]: time="2025-05-15T10:47:24.230725082Z" level=info msg="CreateContainer within sandbox \"1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\"" May 15 10:47:24.231165 env[1214]: time="2025-05-15T10:47:24.231142818Z" level=info msg="StartContainer for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\"" May 15 10:47:24.244618 systemd[1]: Started cri-containerd-40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f.scope. 
May 15 10:47:24.430753 env[1214]: time="2025-05-15T10:47:24.430687931Z" level=info msg="StartContainer for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" returns successfully" May 15 10:47:24.434988 kubelet[1909]: E0515 10:47:24.434701 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:24.436445 env[1214]: time="2025-05-15T10:47:24.436418906Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:47:24.453726 env[1214]: time="2025-05-15T10:47:24.453663347Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\"" May 15 10:47:24.456086 env[1214]: time="2025-05-15T10:47:24.456049357Z" level=info msg="StartContainer for \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\"" May 15 10:47:24.477996 systemd[1]: Started cri-containerd-bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059.scope. May 15 10:47:24.530055 systemd[1]: cri-containerd-bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059.scope: Deactivated successfully. 
May 15 10:47:24.532281 env[1214]: time="2025-05-15T10:47:24.532248722Z" level=info msg="StartContainer for \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\" returns successfully" May 15 10:47:24.699057 env[1214]: time="2025-05-15T10:47:24.698935070Z" level=info msg="shim disconnected" id=bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059 May 15 10:47:24.699057 env[1214]: time="2025-05-15T10:47:24.698993010Z" level=warning msg="cleaning up after shim disconnected" id=bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059 namespace=k8s.io May 15 10:47:24.699057 env[1214]: time="2025-05-15T10:47:24.699005223Z" level=info msg="cleaning up dead shim" May 15 10:47:24.705805 env[1214]: time="2025-05-15T10:47:24.705738864Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:47:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2499 runtime=io.containerd.runc.v2\n" May 15 10:47:25.392067 systemd[1]: Started sshd@5-10.0.0.121:22-10.0.0.1:52840.service. May 15 10:47:25.427713 sshd[2513]: Accepted publickey for core from 10.0.0.1 port 52840 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:47:25.428957 sshd[2513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:47:25.432594 systemd-logind[1196]: New session 6 of user core. May 15 10:47:25.433357 systemd[1]: Started session-6.scope. 
May 15 10:47:25.439007 kubelet[1909]: E0515 10:47:25.438722 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:25.439007 kubelet[1909]: E0515 10:47:25.438754 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:25.440297 env[1214]: time="2025-05-15T10:47:25.440255522Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:47:25.457320 env[1214]: time="2025-05-15T10:47:25.457267890Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\"" May 15 10:47:25.457813 env[1214]: time="2025-05-15T10:47:25.457789391Z" level=info msg="StartContainer for \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\"" May 15 10:47:25.460391 kubelet[1909]: I0515 10:47:25.460319 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wv4z7" podStartSLOduration=2.310456647 podStartE2EDuration="16.459845169s" podCreationTimestamp="2025-05-15 10:47:09 +0000 UTC" firstStartedPulling="2025-05-15 10:47:10.069399997 +0000 UTC m=+7.008124101" lastFinishedPulling="2025-05-15 10:47:24.218788519 +0000 UTC m=+21.157512623" observedRunningTime="2025-05-15 10:47:25.459577716 +0000 UTC m=+22.398301820" watchObservedRunningTime="2025-05-15 10:47:25.459845169 +0000 UTC m=+22.398569263" May 15 10:47:25.478328 systemd[1]: Started cri-containerd-45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe.scope. 
May 15 10:47:25.500888 systemd[1]: cri-containerd-45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe.scope: Deactivated successfully. May 15 10:47:25.502091 env[1214]: time="2025-05-15T10:47:25.502058312Z" level=info msg="StartContainer for \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\" returns successfully" May 15 10:47:25.520105 env[1214]: time="2025-05-15T10:47:25.520058177Z" level=info msg="shim disconnected" id=45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe May 15 10:47:25.520105 env[1214]: time="2025-05-15T10:47:25.520102962Z" level=warning msg="cleaning up after shim disconnected" id=45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe namespace=k8s.io May 15 10:47:25.520297 env[1214]: time="2025-05-15T10:47:25.520111318Z" level=info msg="cleaning up dead shim" May 15 10:47:25.526961 env[1214]: time="2025-05-15T10:47:25.526920559Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:47:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2569 runtime=io.containerd.runc.v2\n" May 15 10:47:25.552042 sshd[2513]: pam_unix(sshd:session): session closed for user core May 15 10:47:25.554417 systemd[1]: sshd@5-10.0.0.121:22-10.0.0.1:52840.service: Deactivated successfully. May 15 10:47:25.555102 systemd[1]: session-6.scope: Deactivated successfully. May 15 10:47:25.555771 systemd-logind[1196]: Session 6 logged out. Waiting for processes to exit. May 15 10:47:25.556399 systemd-logind[1196]: Removed session 6. May 15 10:47:26.085230 systemd[1]: run-containerd-runc-k8s.io-45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe-runc.wabGHq.mount: Deactivated successfully. May 15 10:47:26.085318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe-rootfs.mount: Deactivated successfully. 
May 15 10:47:26.442746 kubelet[1909]: E0515 10:47:26.442715 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:26.443207 kubelet[1909]: E0515 10:47:26.442804 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:47:26.444487 env[1214]: time="2025-05-15T10:47:26.444450287Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:47:26.460400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154076289.mount: Deactivated successfully. May 15 10:47:26.462202 env[1214]: time="2025-05-15T10:47:26.462167255Z" level=info msg="CreateContainer within sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\"" May 15 10:47:26.462714 env[1214]: time="2025-05-15T10:47:26.462672776Z" level=info msg="StartContainer for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\"" May 15 10:47:26.477925 systemd[1]: Started cri-containerd-b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47.scope. May 15 10:47:26.502557 env[1214]: time="2025-05-15T10:47:26.500282403Z" level=info msg="StartContainer for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" returns successfully" May 15 10:47:26.632038 kubelet[1909]: I0515 10:47:26.631999 1909 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 10:47:26.679508 systemd[1]: Created slice kubepods-burstable-pod7e421266_bc9d_46e3_b844_0e71259f6815.slice. 
May 15 10:47:26.683696 systemd[1]: Created slice kubepods-burstable-pod6058da46_fb77_4ce8_952c_b119462e13b5.slice.
May 15 10:47:26.772551 kubelet[1909]: I0515 10:47:26.772443 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6058da46-fb77-4ce8-952c-b119462e13b5-config-volume\") pod \"coredns-668d6bf9bc-wvcc7\" (UID: \"6058da46-fb77-4ce8-952c-b119462e13b5\") " pod="kube-system/coredns-668d6bf9bc-wvcc7"
May 15 10:47:26.772551 kubelet[1909]: I0515 10:47:26.772510 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp5w6\" (UniqueName: \"kubernetes.io/projected/7e421266-bc9d-46e3-b844-0e71259f6815-kube-api-access-zp5w6\") pod \"coredns-668d6bf9bc-pn6lz\" (UID: \"7e421266-bc9d-46e3-b844-0e71259f6815\") " pod="kube-system/coredns-668d6bf9bc-pn6lz"
May 15 10:47:26.772551 kubelet[1909]: I0515 10:47:26.772543 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfmm\" (UniqueName: \"kubernetes.io/projected/6058da46-fb77-4ce8-952c-b119462e13b5-kube-api-access-7gfmm\") pod \"coredns-668d6bf9bc-wvcc7\" (UID: \"6058da46-fb77-4ce8-952c-b119462e13b5\") " pod="kube-system/coredns-668d6bf9bc-wvcc7"
May 15 10:47:26.772714 kubelet[1909]: I0515 10:47:26.772562 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e421266-bc9d-46e3-b844-0e71259f6815-config-volume\") pod \"coredns-668d6bf9bc-pn6lz\" (UID: \"7e421266-bc9d-46e3-b844-0e71259f6815\") " pod="kube-system/coredns-668d6bf9bc-pn6lz"
May 15 10:47:26.984554 kubelet[1909]: E0515 10:47:26.984493 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:26.986286 env[1214]: time="2025-05-15T10:47:26.986236863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pn6lz,Uid:7e421266-bc9d-46e3-b844-0e71259f6815,Namespace:kube-system,Attempt:0,}"
May 15 10:47:26.986703 kubelet[1909]: E0515 10:47:26.986673 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:26.987110 env[1214]: time="2025-05-15T10:47:26.987052297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wvcc7,Uid:6058da46-fb77-4ce8-952c-b119462e13b5,Namespace:kube-system,Attempt:0,}"
May 15 10:47:27.450372 kubelet[1909]: E0515 10:47:27.450315 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:28.421598 systemd-networkd[1031]: cilium_host: Link UP
May 15 10:47:28.422000 systemd-networkd[1031]: cilium_net: Link UP
May 15 10:47:28.425936 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 15 10:47:28.425989 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 15 10:47:28.426113 systemd-networkd[1031]: cilium_net: Gained carrier
May 15 10:47:28.426261 systemd-networkd[1031]: cilium_host: Gained carrier
May 15 10:47:28.426349 systemd-networkd[1031]: cilium_net: Gained IPv6LL
May 15 10:47:28.426460 systemd-networkd[1031]: cilium_host: Gained IPv6LL
May 15 10:47:28.450063 kubelet[1909]: E0515 10:47:28.449930 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:28.497105 systemd-networkd[1031]: cilium_vxlan: Link UP
May 15 10:47:28.497112 systemd-networkd[1031]: cilium_vxlan: Gained carrier
May 15 10:47:28.682580 kernel: NET: Registered PF_ALG protocol family
May 15 10:47:29.212463 systemd-networkd[1031]: lxc_health: Link UP
May 15 10:47:29.221660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 10:47:29.222120 systemd-networkd[1031]: lxc_health: Gained carrier
May 15 10:47:29.437911 kubelet[1909]: I0515 10:47:29.437824 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xklc9" podStartSLOduration=9.112797714 podStartE2EDuration="21.437744636s" podCreationTimestamp="2025-05-15 10:47:08 +0000 UTC" firstStartedPulling="2025-05-15 10:47:09.750200055 +0000 UTC m=+6.688924159" lastFinishedPulling="2025-05-15 10:47:22.075146977 +0000 UTC m=+19.013871081" observedRunningTime="2025-05-15 10:47:27.463767481 +0000 UTC m=+24.402491585" watchObservedRunningTime="2025-05-15 10:47:29.437744636 +0000 UTC m=+26.376468740"
May 15 10:47:29.452005 kubelet[1909]: E0515 10:47:29.451963 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:29.528783 systemd-networkd[1031]: lxc8f79234e696a: Link UP
May 15 10:47:29.536562 kernel: eth0: renamed from tmp09fdc
May 15 10:47:29.550675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 15 10:47:29.550792 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8f79234e696a: link becomes ready
May 15 10:47:29.550937 systemd-networkd[1031]: lxc8f79234e696a: Gained carrier
May 15 10:47:29.552110 systemd-networkd[1031]: lxcfc28c83a1660: Link UP
May 15 10:47:29.561640 kernel: eth0: renamed from tmpe1dfb
May 15 10:47:29.567652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfc28c83a1660: link becomes ready
May 15 10:47:29.567745 systemd-networkd[1031]: lxcfc28c83a1660: Gained carrier
May 15 10:47:29.838861 systemd-networkd[1031]: cilium_vxlan: Gained IPv6LL
May 15 10:47:30.453212 kubelet[1909]: I0515 10:47:30.453175 1909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 10:47:30.453676 kubelet[1909]: E0515 10:47:30.453660 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:30.557076 systemd[1]: Started sshd@6-10.0.0.121:22-10.0.0.1:52856.service.
May 15 10:47:30.592312 sshd[3118]: Accepted publickey for core from 10.0.0.1 port 52856 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:30.593383 sshd[3118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:30.597104 systemd-logind[1196]: New session 7 of user core.
May 15 10:47:30.598130 systemd[1]: Started session-7.scope.
May 15 10:47:30.705811 sshd[3118]: pam_unix(sshd:session): session closed for user core
May 15 10:47:30.708062 systemd[1]: sshd@6-10.0.0.121:22-10.0.0.1:52856.service: Deactivated successfully.
May 15 10:47:30.708769 systemd[1]: session-7.scope: Deactivated successfully.
May 15 10:47:30.709332 systemd-logind[1196]: Session 7 logged out. Waiting for processes to exit.
May 15 10:47:30.710002 systemd-logind[1196]: Removed session 7.
May 15 10:47:31.052703 systemd-networkd[1031]: lxc_health: Gained IPv6LL
May 15 10:47:31.436723 systemd-networkd[1031]: lxcfc28c83a1660: Gained IPv6LL
May 15 10:47:31.564627 systemd-networkd[1031]: lxc8f79234e696a: Gained IPv6LL
May 15 10:47:32.780948 env[1214]: time="2025-05-15T10:47:32.780887734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:47:32.780948 env[1214]: time="2025-05-15T10:47:32.780929413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:47:32.780948 env[1214]: time="2025-05-15T10:47:32.780938971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:47:32.781321 env[1214]: time="2025-05-15T10:47:32.781091528Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09fdc68e60542e1271c86511f93f2af85a5dd6ffc0cf4ceb508b22cb055527aa pid=3155 runtime=io.containerd.runc.v2
May 15 10:47:32.782892 env[1214]: time="2025-05-15T10:47:32.782850273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 10:47:32.782948 env[1214]: time="2025-05-15T10:47:32.782898683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 10:47:32.782948 env[1214]: time="2025-05-15T10:47:32.782919372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 10:47:32.783053 env[1214]: time="2025-05-15T10:47:32.783026544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1dfb256ad5576090ad2ea9b2d38dcc1b279232c04709863158fb0c817425371 pid=3163 runtime=io.containerd.runc.v2
May 15 10:47:32.794180 systemd[1]: Started cri-containerd-e1dfb256ad5576090ad2ea9b2d38dcc1b279232c04709863158fb0c817425371.scope.
May 15 10:47:32.804162 systemd[1]: Started cri-containerd-09fdc68e60542e1271c86511f93f2af85a5dd6ffc0cf4ceb508b22cb055527aa.scope.
May 15 10:47:32.809594 systemd-resolved[1148]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 10:47:32.813321 systemd-resolved[1148]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 10:47:32.834122 env[1214]: time="2025-05-15T10:47:32.834089751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wvcc7,Uid:6058da46-fb77-4ce8-952c-b119462e13b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1dfb256ad5576090ad2ea9b2d38dcc1b279232c04709863158fb0c817425371\""
May 15 10:47:32.835108 kubelet[1909]: E0515 10:47:32.834706 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:32.836735 env[1214]: time="2025-05-15T10:47:32.836697762Z" level=info msg="CreateContainer within sandbox \"e1dfb256ad5576090ad2ea9b2d38dcc1b279232c04709863158fb0c817425371\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 10:47:32.838826 env[1214]: time="2025-05-15T10:47:32.838612280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pn6lz,Uid:7e421266-bc9d-46e3-b844-0e71259f6815,Namespace:kube-system,Attempt:0,} returns sandbox id \"09fdc68e60542e1271c86511f93f2af85a5dd6ffc0cf4ceb508b22cb055527aa\""
May 15 10:47:32.839340 kubelet[1909]: E0515 10:47:32.839308 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:32.841044 env[1214]: time="2025-05-15T10:47:32.841004857Z" level=info msg="CreateContainer within sandbox \"09fdc68e60542e1271c86511f93f2af85a5dd6ffc0cf4ceb508b22cb055527aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 10:47:32.853238 env[1214]: time="2025-05-15T10:47:32.853209253Z" level=info msg="CreateContainer within sandbox \"e1dfb256ad5576090ad2ea9b2d38dcc1b279232c04709863158fb0c817425371\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04bad1de2a2860c46d047698c2b007606abc4aceaf6438d3148b6883a38e512b\""
May 15 10:47:32.853712 env[1214]: time="2025-05-15T10:47:32.853678546Z" level=info msg="StartContainer for \"04bad1de2a2860c46d047698c2b007606abc4aceaf6438d3148b6883a38e512b\""
May 15 10:47:32.859810 env[1214]: time="2025-05-15T10:47:32.859762069Z" level=info msg="CreateContainer within sandbox \"09fdc68e60542e1271c86511f93f2af85a5dd6ffc0cf4ceb508b22cb055527aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"015c73620a0b9b35a4854c7a47df699ca34f19d56c2e67418c057242c9a8e2ca\""
May 15 10:47:32.860143 env[1214]: time="2025-05-15T10:47:32.860120772Z" level=info msg="StartContainer for \"015c73620a0b9b35a4854c7a47df699ca34f19d56c2e67418c057242c9a8e2ca\""
May 15 10:47:32.866913 systemd[1]: Started cri-containerd-04bad1de2a2860c46d047698c2b007606abc4aceaf6438d3148b6883a38e512b.scope.
May 15 10:47:32.883824 systemd[1]: Started cri-containerd-015c73620a0b9b35a4854c7a47df699ca34f19d56c2e67418c057242c9a8e2ca.scope.
May 15 10:47:32.891360 env[1214]: time="2025-05-15T10:47:32.891283058Z" level=info msg="StartContainer for \"04bad1de2a2860c46d047698c2b007606abc4aceaf6438d3148b6883a38e512b\" returns successfully"
May 15 10:47:32.911651 env[1214]: time="2025-05-15T10:47:32.911603408Z" level=info msg="StartContainer for \"015c73620a0b9b35a4854c7a47df699ca34f19d56c2e67418c057242c9a8e2ca\" returns successfully"
May 15 10:47:33.458255 kubelet[1909]: E0515 10:47:33.458223 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:33.459932 kubelet[1909]: E0515 10:47:33.459904 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:33.470682 kubelet[1909]: I0515 10:47:33.470418 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wvcc7" podStartSLOduration=24.470397694 podStartE2EDuration="24.470397694s" podCreationTimestamp="2025-05-15 10:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:47:33.468303448 +0000 UTC m=+30.407027572" watchObservedRunningTime="2025-05-15 10:47:33.470397694 +0000 UTC m=+30.409121798"
May 15 10:47:33.480363 kubelet[1909]: I0515 10:47:33.480132 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pn6lz" podStartSLOduration=24.480117258 podStartE2EDuration="24.480117258s" podCreationTimestamp="2025-05-15 10:47:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:47:33.479816242 +0000 UTC m=+30.418540346" watchObservedRunningTime="2025-05-15 10:47:33.480117258 +0000 UTC m=+30.418841362"
May 15 10:47:34.461315 kubelet[1909]: E0515 10:47:34.461280 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:34.461743 kubelet[1909]: E0515 10:47:34.461335 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:35.464705 kubelet[1909]: E0515 10:47:35.464676 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:35.465074 kubelet[1909]: E0515 10:47:35.464735 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:35.710020 systemd[1]: Started sshd@7-10.0.0.121:22-10.0.0.1:51980.service.
May 15 10:47:35.744613 sshd[3310]: Accepted publickey for core from 10.0.0.1 port 51980 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:35.745588 sshd[3310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:35.748719 systemd-logind[1196]: New session 8 of user core.
May 15 10:47:35.749655 systemd[1]: Started session-8.scope.
May 15 10:47:35.852940 sshd[3310]: pam_unix(sshd:session): session closed for user core
May 15 10:47:35.854776 systemd[1]: sshd@7-10.0.0.121:22-10.0.0.1:51980.service: Deactivated successfully.
May 15 10:47:35.855409 systemd[1]: session-8.scope: Deactivated successfully.
May 15 10:47:35.856059 systemd-logind[1196]: Session 8 logged out. Waiting for processes to exit.
May 15 10:47:35.856691 systemd-logind[1196]: Removed session 8.
May 15 10:47:36.877249 kubelet[1909]: I0515 10:47:36.877197 1909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 10:47:36.877731 kubelet[1909]: E0515 10:47:36.877704 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:37.467491 kubelet[1909]: E0515 10:47:37.467462 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:47:40.857455 systemd[1]: Started sshd@8-10.0.0.121:22-10.0.0.1:51992.service.
May 15 10:47:40.890133 sshd[3327]: Accepted publickey for core from 10.0.0.1 port 51992 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:40.891131 sshd[3327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:40.894442 systemd-logind[1196]: New session 9 of user core.
May 15 10:47:40.895213 systemd[1]: Started session-9.scope.
May 15 10:47:41.000458 sshd[3327]: pam_unix(sshd:session): session closed for user core
May 15 10:47:41.002845 systemd[1]: sshd@8-10.0.0.121:22-10.0.0.1:51992.service: Deactivated successfully.
May 15 10:47:41.003517 systemd[1]: session-9.scope: Deactivated successfully.
May 15 10:47:41.004098 systemd-logind[1196]: Session 9 logged out. Waiting for processes to exit.
May 15 10:47:41.004995 systemd-logind[1196]: Removed session 9.
May 15 10:47:46.004931 systemd[1]: Started sshd@9-10.0.0.121:22-10.0.0.1:50876.service.
May 15 10:47:46.038404 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 50876 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:46.039367 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:46.042437 systemd-logind[1196]: New session 10 of user core.
May 15 10:47:46.043379 systemd[1]: Started session-10.scope.
May 15 10:47:46.147912 sshd[3341]: pam_unix(sshd:session): session closed for user core
May 15 10:47:46.150315 systemd[1]: sshd@9-10.0.0.121:22-10.0.0.1:50876.service: Deactivated successfully.
May 15 10:47:46.150851 systemd[1]: session-10.scope: Deactivated successfully.
May 15 10:47:46.151338 systemd-logind[1196]: Session 10 logged out. Waiting for processes to exit.
May 15 10:47:46.152366 systemd[1]: Started sshd@10-10.0.0.121:22-10.0.0.1:50890.service.
May 15 10:47:46.153500 systemd-logind[1196]: Removed session 10.
May 15 10:47:46.186212 sshd[3355]: Accepted publickey for core from 10.0.0.1 port 50890 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:46.187232 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:46.190447 systemd-logind[1196]: New session 11 of user core.
May 15 10:47:46.191448 systemd[1]: Started session-11.scope.
May 15 10:47:46.326014 sshd[3355]: pam_unix(sshd:session): session closed for user core
May 15 10:47:46.329057 systemd[1]: Started sshd@11-10.0.0.121:22-10.0.0.1:50898.service.
May 15 10:47:46.329454 systemd[1]: sshd@10-10.0.0.121:22-10.0.0.1:50890.service: Deactivated successfully.
May 15 10:47:46.329935 systemd[1]: session-11.scope: Deactivated successfully.
May 15 10:47:46.330461 systemd-logind[1196]: Session 11 logged out. Waiting for processes to exit.
May 15 10:47:46.331311 systemd-logind[1196]: Removed session 11.
May 15 10:47:46.363578 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 50898 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:46.364708 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:46.367939 systemd-logind[1196]: New session 12 of user core.
May 15 10:47:46.368709 systemd[1]: Started session-12.scope.
May 15 10:47:46.468515 sshd[3366]: pam_unix(sshd:session): session closed for user core
May 15 10:47:46.470684 systemd[1]: sshd@11-10.0.0.121:22-10.0.0.1:50898.service: Deactivated successfully.
May 15 10:47:46.471356 systemd[1]: session-12.scope: Deactivated successfully.
May 15 10:47:46.471843 systemd-logind[1196]: Session 12 logged out. Waiting for processes to exit.
May 15 10:47:46.472446 systemd-logind[1196]: Removed session 12.
May 15 10:47:51.472628 systemd[1]: Started sshd@12-10.0.0.121:22-10.0.0.1:50904.service.
May 15 10:47:51.504904 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:51.505859 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:51.508760 systemd-logind[1196]: New session 13 of user core.
May 15 10:47:51.509479 systemd[1]: Started session-13.scope.
May 15 10:47:51.608165 sshd[3381]: pam_unix(sshd:session): session closed for user core
May 15 10:47:51.610021 systemd[1]: sshd@12-10.0.0.121:22-10.0.0.1:50904.service: Deactivated successfully.
May 15 10:47:51.610693 systemd[1]: session-13.scope: Deactivated successfully.
May 15 10:47:51.611250 systemd-logind[1196]: Session 13 logged out. Waiting for processes to exit.
May 15 10:47:51.611865 systemd-logind[1196]: Removed session 13.
May 15 10:47:56.612002 systemd[1]: Started sshd@13-10.0.0.121:22-10.0.0.1:39120.service.
May 15 10:47:56.647426 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 39120 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:56.648332 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:56.651553 systemd-logind[1196]: New session 14 of user core.
May 15 10:47:56.652279 systemd[1]: Started session-14.scope.
May 15 10:47:56.755165 sshd[3394]: pam_unix(sshd:session): session closed for user core
May 15 10:47:56.757865 systemd[1]: sshd@13-10.0.0.121:22-10.0.0.1:39120.service: Deactivated successfully.
May 15 10:47:56.758359 systemd[1]: session-14.scope: Deactivated successfully.
May 15 10:47:56.758836 systemd-logind[1196]: Session 14 logged out. Waiting for processes to exit.
May 15 10:47:56.759734 systemd[1]: Started sshd@14-10.0.0.121:22-10.0.0.1:39130.service.
May 15 10:47:56.760363 systemd-logind[1196]: Removed session 14.
May 15 10:47:56.795365 sshd[3407]: Accepted publickey for core from 10.0.0.1 port 39130 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:56.796502 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:56.799834 systemd-logind[1196]: New session 15 of user core.
May 15 10:47:56.800679 systemd[1]: Started session-15.scope.
May 15 10:47:56.949184 sshd[3407]: pam_unix(sshd:session): session closed for user core
May 15 10:47:56.951748 systemd[1]: sshd@14-10.0.0.121:22-10.0.0.1:39130.service: Deactivated successfully.
May 15 10:47:56.952259 systemd[1]: session-15.scope: Deactivated successfully.
May 15 10:47:56.953024 systemd-logind[1196]: Session 15 logged out. Waiting for processes to exit.
May 15 10:47:56.954328 systemd[1]: Started sshd@15-10.0.0.121:22-10.0.0.1:39146.service.
May 15 10:47:56.955054 systemd-logind[1196]: Removed session 15.
May 15 10:47:56.989597 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 39146 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:56.990643 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:56.994091 systemd-logind[1196]: New session 16 of user core.
May 15 10:47:56.994862 systemd[1]: Started session-16.scope.
May 15 10:47:57.821027 sshd[3418]: pam_unix(sshd:session): session closed for user core
May 15 10:47:57.823817 systemd[1]: Started sshd@16-10.0.0.121:22-10.0.0.1:39160.service.
May 15 10:47:57.824210 systemd[1]: sshd@15-10.0.0.121:22-10.0.0.1:39146.service: Deactivated successfully.
May 15 10:47:57.824789 systemd[1]: session-16.scope: Deactivated successfully.
May 15 10:47:57.825516 systemd-logind[1196]: Session 16 logged out. Waiting for processes to exit.
May 15 10:47:57.826634 systemd-logind[1196]: Removed session 16.
May 15 10:47:57.860271 sshd[3435]: Accepted publickey for core from 10.0.0.1 port 39160 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:57.861365 sshd[3435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:57.864565 systemd-logind[1196]: New session 17 of user core.
May 15 10:47:57.865372 systemd[1]: Started session-17.scope.
May 15 10:47:58.067975 sshd[3435]: pam_unix(sshd:session): session closed for user core
May 15 10:47:58.071255 systemd[1]: Started sshd@17-10.0.0.121:22-10.0.0.1:39162.service.
May 15 10:47:58.072073 systemd[1]: sshd@16-10.0.0.121:22-10.0.0.1:39160.service: Deactivated successfully.
May 15 10:47:58.072890 systemd[1]: session-17.scope: Deactivated successfully.
May 15 10:47:58.073507 systemd-logind[1196]: Session 17 logged out. Waiting for processes to exit.
May 15 10:47:58.074262 systemd-logind[1196]: Removed session 17.
May 15 10:47:58.104718 sshd[3448]: Accepted publickey for core from 10.0.0.1 port 39162 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:47:58.105669 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:47:58.108812 systemd-logind[1196]: New session 18 of user core.
May 15 10:47:58.109544 systemd[1]: Started session-18.scope.
May 15 10:47:58.205771 sshd[3448]: pam_unix(sshd:session): session closed for user core
May 15 10:47:58.208136 systemd[1]: sshd@17-10.0.0.121:22-10.0.0.1:39162.service: Deactivated successfully.
May 15 10:47:58.208790 systemd[1]: session-18.scope: Deactivated successfully.
May 15 10:47:58.209255 systemd-logind[1196]: Session 18 logged out. Waiting for processes to exit.
May 15 10:47:58.209952 systemd-logind[1196]: Removed session 18.
May 15 10:48:03.209621 systemd[1]: Started sshd@18-10.0.0.121:22-10.0.0.1:39174.service.
May 15 10:48:03.242805 sshd[3466]: Accepted publickey for core from 10.0.0.1 port 39174 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:48:03.243933 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:48:03.248258 systemd[1]: Started session-19.scope.
May 15 10:48:03.248549 systemd-logind[1196]: New session 19 of user core.
May 15 10:48:03.344793 sshd[3466]: pam_unix(sshd:session): session closed for user core
May 15 10:48:03.347082 systemd[1]: sshd@18-10.0.0.121:22-10.0.0.1:39174.service: Deactivated successfully.
May 15 10:48:03.347750 systemd[1]: session-19.scope: Deactivated successfully.
May 15 10:48:03.348219 systemd-logind[1196]: Session 19 logged out. Waiting for processes to exit.
May 15 10:48:03.348875 systemd-logind[1196]: Removed session 19.
May 15 10:48:08.349750 systemd[1]: Started sshd@19-10.0.0.121:22-10.0.0.1:58522.service.
May 15 10:48:08.385404 sshd[3482]: Accepted publickey for core from 10.0.0.1 port 58522 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:48:08.386503 sshd[3482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:48:08.389880 systemd-logind[1196]: New session 20 of user core.
May 15 10:48:08.390677 systemd[1]: Started session-20.scope.
May 15 10:48:08.487717 sshd[3482]: pam_unix(sshd:session): session closed for user core
May 15 10:48:08.489915 systemd[1]: sshd@19-10.0.0.121:22-10.0.0.1:58522.service: Deactivated successfully.
May 15 10:48:08.490610 systemd[1]: session-20.scope: Deactivated successfully.
May 15 10:48:08.491184 systemd-logind[1196]: Session 20 logged out. Waiting for processes to exit.
May 15 10:48:08.491856 systemd-logind[1196]: Removed session 20.
May 15 10:48:13.491959 systemd[1]: Started sshd@20-10.0.0.121:22-10.0.0.1:57476.service.
May 15 10:48:13.524444 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 57476 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:48:13.525418 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:48:13.528392 systemd-logind[1196]: New session 21 of user core.
May 15 10:48:13.529339 systemd[1]: Started session-21.scope.
May 15 10:48:13.628366 sshd[3497]: pam_unix(sshd:session): session closed for user core
May 15 10:48:13.630184 systemd[1]: sshd@20-10.0.0.121:22-10.0.0.1:57476.service: Deactivated successfully.
May 15 10:48:13.630900 systemd[1]: session-21.scope: Deactivated successfully.
May 15 10:48:13.631590 systemd-logind[1196]: Session 21 logged out. Waiting for processes to exit.
May 15 10:48:13.632344 systemd-logind[1196]: Removed session 21.
May 15 10:48:18.633079 systemd[1]: Started sshd@21-10.0.0.121:22-10.0.0.1:57488.service.
May 15 10:48:18.669778 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 57488 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:48:18.670808 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:48:18.673934 systemd-logind[1196]: New session 22 of user core.
May 15 10:48:18.674726 systemd[1]: Started session-22.scope.
May 15 10:48:18.776478 sshd[3511]: pam_unix(sshd:session): session closed for user core
May 15 10:48:18.779232 systemd[1]: sshd@21-10.0.0.121:22-10.0.0.1:57488.service: Deactivated successfully.
May 15 10:48:18.779770 systemd[1]: session-22.scope: Deactivated successfully.
May 15 10:48:18.780277 systemd-logind[1196]: Session 22 logged out. Waiting for processes to exit.
May 15 10:48:18.781267 systemd[1]: Started sshd@22-10.0.0.121:22-10.0.0.1:57492.service.
May 15 10:48:18.781956 systemd-logind[1196]: Removed session 22.
May 15 10:48:18.813793 sshd[3524]: Accepted publickey for core from 10.0.0.1 port 57492 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE
May 15 10:48:18.814734 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 10:48:18.817649 systemd-logind[1196]: New session 23 of user core.
May 15 10:48:18.818321 systemd[1]: Started session-23.scope.
May 15 10:48:20.138314 env[1214]: time="2025-05-15T10:48:20.138264845Z" level=info msg="StopContainer for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" with timeout 30 (s)"
May 15 10:48:20.138705 env[1214]: time="2025-05-15T10:48:20.138548518Z" level=info msg="Stop container \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" with signal terminated"
May 15 10:48:20.146687 systemd[1]: run-containerd-runc-k8s.io-b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47-runc.ao3Utt.mount: Deactivated successfully.
May 15 10:48:20.151032 systemd[1]: cri-containerd-40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f.scope: Deactivated successfully.
May 15 10:48:20.161295 env[1214]: time="2025-05-15T10:48:20.161230090Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 10:48:20.166100 env[1214]: time="2025-05-15T10:48:20.166054698Z" level=info msg="StopContainer for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" with timeout 2 (s)"
May 15 10:48:20.166363 env[1214]: time="2025-05-15T10:48:20.166267867Z" level=info msg="Stop container \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" with signal terminated"
May 15 10:48:20.169173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f-rootfs.mount: Deactivated successfully.
May 15 10:48:20.171789 systemd-networkd[1031]: lxc_health: Link DOWN
May 15 10:48:20.171795 systemd-networkd[1031]: lxc_health: Lost carrier
May 15 10:48:20.177791 env[1214]: time="2025-05-15T10:48:20.177735727Z" level=info msg="shim disconnected" id=40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f
May 15 10:48:20.177791 env[1214]: time="2025-05-15T10:48:20.177788187Z" level=warning msg="cleaning up after shim disconnected" id=40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f namespace=k8s.io
May 15 10:48:20.177791 env[1214]: time="2025-05-15T10:48:20.177799098Z" level=info msg="cleaning up dead shim"
May 15 10:48:20.186602 env[1214]: time="2025-05-15T10:48:20.186561978Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3577 runtime=io.containerd.runc.v2\n"
May 15 10:48:20.189734 env[1214]: time="2025-05-15T10:48:20.189673195Z" level=info msg="StopContainer for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" returns successfully"
May 15 10:48:20.190645 env[1214]: time="2025-05-15T10:48:20.190519846Z" level=info msg="StopPodSandbox for \"1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c\""
May 15 10:48:20.190699 env[1214]: time="2025-05-15T10:48:20.190679451Z" level=info msg="Container to stop \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:48:20.192440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c-shm.mount: Deactivated successfully.
May 15 10:48:20.198052 systemd[1]: cri-containerd-b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47.scope: Deactivated successfully.
May 15 10:48:20.198295 systemd[1]: cri-containerd-b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47.scope: Consumed 5.815s CPU time.
May 15 10:48:20.201724 systemd[1]: cri-containerd-1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c.scope: Deactivated successfully.
May 15 10:48:20.213088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47-rootfs.mount: Deactivated successfully.
May 15 10:48:20.218000 env[1214]: time="2025-05-15T10:48:20.217948919Z" level=info msg="shim disconnected" id=b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47
May 15 10:48:20.218000 env[1214]: time="2025-05-15T10:48:20.218005988Z" level=warning msg="cleaning up after shim disconnected" id=b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47 namespace=k8s.io
May 15 10:48:20.218146 env[1214]: time="2025-05-15T10:48:20.218016459Z" level=info msg="cleaning up dead shim"
May 15 10:48:20.223666 env[1214]: time="2025-05-15T10:48:20.223613746Z" level=info msg="shim disconnected" id=1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c
May 15 10:48:20.223666 env[1214]: time="2025-05-15T10:48:20.223653363Z" level=warning msg="cleaning up after shim disconnected" id=1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c namespace=k8s.io
May 15 10:48:20.223666 env[1214]: time="2025-05-15T10:48:20.223661578Z" level=info msg="cleaning up dead shim"
May 15 10:48:20.226410 env[1214]: time="2025-05-15T10:48:20.226379944Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3622 runtime=io.containerd.runc.v2\n"
May 15 10:48:20.228875 env[1214]: time="2025-05-15T10:48:20.228847098Z" level=info msg="StopContainer for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" returns successfully"
May 15 10:48:20.229386 env[1214]: time="2025-05-15T10:48:20.229347466Z" level=info msg="StopPodSandbox for \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\""
May 15 10:48:20.229447 env[1214]: time="2025-05-15T10:48:20.229413893Z" level=info msg="Container to stop \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:48:20.229447 env[1214]: time="2025-05-15T10:48:20.229427390Z" level=info msg="Container to stop \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:48:20.229447 env[1214]: time="2025-05-15T10:48:20.229436807Z" level=info msg="Container to stop \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:48:20.229521 env[1214]: time="2025-05-15T10:48:20.229446486Z" level=info msg="Container to stop \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:48:20.229521 env[1214]: time="2025-05-15T10:48:20.229457116Z" level=info msg="Container to stop \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 10:48:20.231091 env[1214]: time="2025-05-15T10:48:20.231056318Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3634 runtime=io.containerd.runc.v2\n"
May 15 10:48:20.231299 env[1214]: time="2025-05-15T10:48:20.231269256Z" level=info msg="TearDown network for sandbox \"1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c\" successfully"
May 15 10:48:20.231368 env[1214]: time="2025-05-15T10:48:20.231336695Z" level=info msg="StopPodSandbox for \"1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c\" returns successfully"
May 15 10:48:20.235106 systemd[1]: cri-containerd-6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508.scope: Deactivated successfully.
May 15 10:48:20.254068 env[1214]: time="2025-05-15T10:48:20.254015763Z" level=info msg="shim disconnected" id=6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508
May 15 10:48:20.254068 env[1214]: time="2025-05-15T10:48:20.254057132Z" level=warning msg="cleaning up after shim disconnected" id=6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508 namespace=k8s.io
May 15 10:48:20.254068 env[1214]: time="2025-05-15T10:48:20.254065598Z" level=info msg="cleaning up dead shim"
May 15 10:48:20.264261 env[1214]: time="2025-05-15T10:48:20.263851096Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3665 runtime=io.containerd.runc.v2\n"
May 15 10:48:20.264261 env[1214]: time="2025-05-15T10:48:20.264105183Z" level=info msg="TearDown network for sandbox \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" successfully"
May 15 10:48:20.264261 env[1214]: time="2025-05-15T10:48:20.264121695Z" level=info msg="StopPodSandbox for \"6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508\" returns successfully"
May 15 10:48:20.273415 kubelet[1909]: I0515 10:48:20.273383 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh7g9\" (UniqueName: \"kubernetes.io/projected/04185e21-f162-45a9-ad40-10da46df426d-kube-api-access-kh7g9\") pod \"04185e21-f162-45a9-ad40-10da46df426d\" (UID: \"04185e21-f162-45a9-ad40-10da46df426d\") "
May 15 10:48:20.273415 kubelet[1909]: I0515 10:48:20.273420 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04185e21-f162-45a9-ad40-10da46df426d-cilium-config-path\") pod \"04185e21-f162-45a9-ad40-10da46df426d\" (UID: \"04185e21-f162-45a9-ad40-10da46df426d\") "
May 15 10:48:20.275328 kubelet[1909]: I0515 10:48:20.275306 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04185e21-f162-45a9-ad40-10da46df426d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04185e21-f162-45a9-ad40-10da46df426d" (UID: "04185e21-f162-45a9-ad40-10da46df426d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 10:48:20.276555 kubelet[1909]: I0515 10:48:20.276497 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04185e21-f162-45a9-ad40-10da46df426d-kube-api-access-kh7g9" (OuterVolumeSpecName: "kube-api-access-kh7g9") pod "04185e21-f162-45a9-ad40-10da46df426d" (UID: "04185e21-f162-45a9-ad40-10da46df426d"). InnerVolumeSpecName "kube-api-access-kh7g9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 10:48:20.373645 kubelet[1909]: I0515 10:48:20.373614 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-xtables-lock\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373645 kubelet[1909]: I0515 10:48:20.373648 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-lib-modules\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373762 kubelet[1909]: I0515 10:48:20.373665 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-run\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373762 kubelet[1909]: I0515 10:48:20.373691 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-hubble-tls\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373762 kubelet[1909]: I0515 10:48:20.373705 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-net\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373762 kubelet[1909]: I0515 10:48:20.373721 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-hostproc\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373762 kubelet[1909]: I0515 10:48:20.373739 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c07df115-bc56-4b47-bbdc-6d7a25dce805-clustermesh-secrets\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373762 kubelet[1909]: I0515 10:48:20.373738 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.373909 kubelet[1909]: I0515 10:48:20.373756 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-kernel\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373909 kubelet[1909]: I0515 10:48:20.373774 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-cgroup\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373909 kubelet[1909]: I0515 10:48:20.373781 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.373909 kubelet[1909]: I0515 10:48:20.373793 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrrkh\" (UniqueName: \"kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-kube-api-access-zrrkh\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.373909 kubelet[1909]: I0515 10:48:20.373797 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374042 kubelet[1909]: I0515 10:48:20.373774 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374042 kubelet[1909]: I0515 10:48:20.373809 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-etc-cni-netd\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.374042 kubelet[1909]: I0515 10:48:20.373827 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cni-path\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.374042 kubelet[1909]: I0515 10:48:20.373837 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374042 kubelet[1909]: I0515 10:48:20.373839 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-bpf-maps\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.373865 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.373898 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-config-path\") pod \"c07df115-bc56-4b47-bbdc-6d7a25dce805\" (UID: \"c07df115-bc56-4b47-bbdc-6d7a25dce805\") "
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.373967 1909 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.373977 1909 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.373988 1909 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.373997 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374190 kubelet[1909]: I0515 10:48:20.374004 1909 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kh7g9\" (UniqueName: \"kubernetes.io/projected/04185e21-f162-45a9-ad40-10da46df426d-kube-api-access-kh7g9\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374348 kubelet[1909]: I0515 10:48:20.374012 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374348 kubelet[1909]: I0515 10:48:20.374033 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04185e21-f162-45a9-ad40-10da46df426d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.374495 kubelet[1909]: I0515 10:48:20.374437 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374495 kubelet[1909]: I0515 10:48:20.374450 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cni-path" (OuterVolumeSpecName: "cni-path") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374495 kubelet[1909]: I0515 10:48:20.374473 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.374495 kubelet[1909]: I0515 10:48:20.374477 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-hostproc" (OuterVolumeSpecName: "hostproc") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 10:48:20.376196 kubelet[1909]: I0515 10:48:20.376161 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-kube-api-access-zrrkh" (OuterVolumeSpecName: "kube-api-access-zrrkh") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "kube-api-access-zrrkh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 10:48:20.376388 kubelet[1909]: I0515 10:48:20.376360 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 10:48:20.376619 kubelet[1909]: I0515 10:48:20.376580 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 10:48:20.376955 kubelet[1909]: I0515 10:48:20.376914 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07df115-bc56-4b47-bbdc-6d7a25dce805-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c07df115-bc56-4b47-bbdc-6d7a25dce805" (UID: "c07df115-bc56-4b47-bbdc-6d7a25dce805"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475048 1909 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c07df115-bc56-4b47-bbdc-6d7a25dce805-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475069 1909 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475078 1909 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475085 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475092 1909 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zrrkh\" (UniqueName: \"kubernetes.io/projected/c07df115-bc56-4b47-bbdc-6d7a25dce805-kube-api-access-zrrkh\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475099 1909 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475106 1909 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475107 kubelet[1909]: I0515 10:48:20.475113 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.475345 kubelet[1909]: I0515 10:48:20.475121 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c07df115-bc56-4b47-bbdc-6d7a25dce805-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 10:48:20.538678 kubelet[1909]: I0515 10:48:20.538646 1909 scope.go:117] "RemoveContainer" containerID="40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f"
May 15 10:48:20.539966 env[1214]: time="2025-05-15T10:48:20.539919984Z" level=info msg="RemoveContainer for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\""
May 15 10:48:20.542429 systemd[1]: Removed slice kubepods-besteffort-pod04185e21_f162_45a9_ad40_10da46df426d.slice.
May 15 10:48:20.545020 systemd[1]: Removed slice kubepods-burstable-podc07df115_bc56_4b47_bbdc_6d7a25dce805.slice.
May 15 10:48:20.545089 systemd[1]: kubepods-burstable-podc07df115_bc56_4b47_bbdc_6d7a25dce805.slice: Consumed 5.904s CPU time.
May 15 10:48:20.545776 env[1214]: time="2025-05-15T10:48:20.545675546Z" level=info msg="RemoveContainer for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" returns successfully"
May 15 10:48:20.545921 kubelet[1909]: I0515 10:48:20.545873 1909 scope.go:117] "RemoveContainer" containerID="40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f"
May 15 10:48:20.546086 env[1214]: time="2025-05-15T10:48:20.546023442Z" level=error msg="ContainerStatus for \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\": not found"
May 15 10:48:20.546188 kubelet[1909]: E0515 10:48:20.546165 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\": not found" containerID="40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f"
May 15 10:48:20.546273 kubelet[1909]: I0515 10:48:20.546192 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f"} err="failed to get container status \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"40d740aa312a107d5ff2b317c6edef543d5d66a7e16fe16d983eb906776f9d0f\": not found"
May 15 10:48:20.546273 kubelet[1909]: I0515 10:48:20.546265 1909 scope.go:117] "RemoveContainer" containerID="b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47"
May 15 10:48:20.547557 env[1214]: time="2025-05-15T10:48:20.547176319Z" level=info msg="RemoveContainer for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\""
May 15 10:48:20.550476 env[1214]: time="2025-05-15T10:48:20.550430069Z" level=info msg="RemoveContainer for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" returns successfully"
May 15 10:48:20.550631 kubelet[1909]: I0515 10:48:20.550598 1909 scope.go:117] "RemoveContainer" containerID="45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe"
May 15 10:48:20.551436 env[1214]: time="2025-05-15T10:48:20.551414906Z" level=info msg="RemoveContainer for \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\""
May 15 10:48:20.554701 env[1214]: time="2025-05-15T10:48:20.554645813Z" level=info msg="RemoveContainer for \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\" returns successfully"
May 15 10:48:20.555564 kubelet[1909]: I0515 10:48:20.555520 1909 scope.go:117] "RemoveContainer" containerID="bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059"
May 15 10:48:20.557381 env[1214]: time="2025-05-15T10:48:20.557353237Z" level=info msg="RemoveContainer for \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\""
May 15 10:48:20.560134 env[1214]: time="2025-05-15T10:48:20.560098404Z" level=info msg="RemoveContainer for \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\" returns successfully"
May 15 10:48:20.560264 kubelet[1909]: I0515 10:48:20.560245 1909 scope.go:117] "RemoveContainer" containerID="2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749"
May 15 10:48:20.561549 env[1214]: time="2025-05-15T10:48:20.561494095Z" level=info msg="RemoveContainer for \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\""
May 15 10:48:20.565583 env[1214]: time="2025-05-15T10:48:20.565549171Z" level=info msg="RemoveContainer for \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\" returns successfully"
May 15 10:48:20.566910 kubelet[1909]: I0515 10:48:20.566887 1909 scope.go:117] "RemoveContainer" containerID="3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e"
May 15 10:48:20.567892 env[1214]: time="2025-05-15T10:48:20.567866218Z" level=info msg="RemoveContainer for \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\""
May 15 10:48:20.570509 env[1214]: time="2025-05-15T10:48:20.570476817Z" level=info msg="RemoveContainer for \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\" returns successfully"
May 15 10:48:20.570660 kubelet[1909]: I0515 10:48:20.570636 1909 scope.go:117] "RemoveContainer" containerID="b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47"
May 15 10:48:20.570870 env[1214]: time="2025-05-15T10:48:20.570805416Z" level=error msg="ContainerStatus for \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\": not found"
May 15 10:48:20.571017 kubelet[1909]: E0515 10:48:20.570950 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\": not found" containerID="b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47"
May 15 10:48:20.571017 kubelet[1909]: I0515 10:48:20.570977 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47"} err="failed to get container status \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\": rpc error: code = NotFound desc = an error occurred when try to find container \"b45df6894edf807a72da1e5886632a03a938e66ad597ee0942823fa7a5323f47\": not found"
May 15 10:48:20.571017 kubelet[1909]: I0515 10:48:20.570997 1909 scope.go:117] "RemoveContainer" containerID="45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe"
May 15 10:48:20.571269 env[1214]: time="2025-05-15T10:48:20.571212396Z" level=error msg="ContainerStatus for \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\": not found"
May 15 10:48:20.571398 kubelet[1909]: E0515 10:48:20.571327 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\": not found" containerID="45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe"
May 15 10:48:20.571398 kubelet[1909]: I0515 10:48:20.571339 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe"} err="failed to get container status \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\": rpc error: code = NotFound desc = an error occurred when try to find container \"45b954a14b157ef5f73adb108042d8efb5420b3885898935724b692ed2df9efe\": not found"
May 15 10:48:20.571398 kubelet[1909]: I0515 10:48:20.571349 1909 scope.go:117] "RemoveContainer" containerID="bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059"
May 15 10:48:20.571576 env[1214]: time="2025-05-15T10:48:20.571504024Z" level=error msg="ContainerStatus for \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\": not found"
May 15 10:48:20.571718 kubelet[1909]: E0515 10:48:20.571694 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\": not found" containerID="bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059"
May 15 10:48:20.571753 kubelet[1909]: I0515 10:48:20.571728 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059"} err="failed to get container status \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc0c461ae8a24a124256ffa3f2fcacb6d9ce23533725e4c37a654609e528a059\": not found"
May 15 10:48:20.571781 kubelet[1909]: I0515 10:48:20.571754 1909 scope.go:117] "RemoveContainer" containerID="2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749"
May 15 10:48:20.571958 env[1214]: time="2025-05-15T10:48:20.571907737Z" level=error msg="ContainerStatus for \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\": not found"
May 15 10:48:20.572139 kubelet[1909]: E0515 10:48:20.572119 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\": not found" containerID="2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749"
May 15 10:48:20.572194 kubelet[1909]: I0515 10:48:20.572143 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749"} err="failed to get container status \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d5739b220a2c86c4fe4d96ead37dcb472d324e7684faa8c1be201357254b749\": not found"
May 15 10:48:20.572194 kubelet[1909]: I0515 10:48:20.572165 1909 scope.go:117] "RemoveContainer" containerID="3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e"
May 15 10:48:20.572364 env[1214]: time="2025-05-15T10:48:20.572323022Z" level=error msg="ContainerStatus for \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\": not found"
May 15 10:48:20.572455 kubelet[1909]: E0515 10:48:20.572430 1909 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\": not found" containerID="3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e"
May 15 10:48:20.572455 kubelet[1909]: I0515 10:48:20.572448 1909 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e"} err="failed to get container status \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ca4913d8c81de77e9484fa31c59ffe3fcae5be9b18dd592e29e795aa1b1147e\": not found"
May 15 10:48:21.142283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a7b76fe34f1ed4d8930445b53c67a8934753798f0409c8a37f316408d22795c-rootfs.mount: Deactivated successfully.
May 15 10:48:21.144543 kubelet[1909]: I0515 10:48:21.143409 1909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04185e21-f162-45a9-ad40-10da46df426d" path="/var/lib/kubelet/pods/04185e21-f162-45a9-ad40-10da46df426d/volumes" May 15 10:48:21.144543 kubelet[1909]: I0515 10:48:21.143784 1909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07df115-bc56-4b47-bbdc-6d7a25dce805" path="/var/lib/kubelet/pods/c07df115-bc56-4b47-bbdc-6d7a25dce805/volumes" May 15 10:48:21.142372 systemd[1]: var-lib-kubelet-pods-04185e21\x2df162\x2d45a9\x2dad40\x2d10da46df426d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkh7g9.mount: Deactivated successfully. May 15 10:48:21.142424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508-rootfs.mount: Deactivated successfully. May 15 10:48:21.142474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e7fb4e29c1a6b9badb91678d15069cbb153abaec8bd97b3fe1bdf14cc321508-shm.mount: Deactivated successfully. May 15 10:48:21.142551 systemd[1]: var-lib-kubelet-pods-c07df115\x2dbc56\x2d4b47\x2dbbdc\x2d6d7a25dce805-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrrkh.mount: Deactivated successfully. May 15 10:48:21.142600 systemd[1]: var-lib-kubelet-pods-c07df115\x2dbc56\x2d4b47\x2dbbdc\x2d6d7a25dce805-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 10:48:21.142647 systemd[1]: var-lib-kubelet-pods-c07df115\x2dbc56\x2d4b47\x2dbbdc\x2d6d7a25dce805-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:48:22.102086 sshd[3524]: pam_unix(sshd:session): session closed for user core May 15 10:48:22.105587 systemd[1]: Started sshd@23-10.0.0.121:22-10.0.0.1:57504.service. May 15 10:48:22.107340 systemd[1]: sshd@22-10.0.0.121:22-10.0.0.1:57492.service: Deactivated successfully. 
May 15 10:48:22.108144 systemd[1]: session-23.scope: Deactivated successfully. May 15 10:48:22.108788 systemd-logind[1196]: Session 23 logged out. Waiting for processes to exit. May 15 10:48:22.109426 systemd-logind[1196]: Removed session 23. May 15 10:48:22.141402 sshd[3684]: Accepted publickey for core from 10.0.0.1 port 57504 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:48:22.142326 sshd[3684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:48:22.145288 systemd-logind[1196]: New session 24 of user core. May 15 10:48:22.146039 systemd[1]: Started session-24.scope. May 15 10:48:22.852663 sshd[3684]: pam_unix(sshd:session): session closed for user core May 15 10:48:22.856361 systemd[1]: Started sshd@24-10.0.0.121:22-10.0.0.1:57512.service. May 15 10:48:22.860194 systemd[1]: sshd@23-10.0.0.121:22-10.0.0.1:57504.service: Deactivated successfully. May 15 10:48:22.860696 systemd[1]: session-24.scope: Deactivated successfully. May 15 10:48:22.861248 systemd-logind[1196]: Session 24 logged out. Waiting for processes to exit. May 15 10:48:22.861935 systemd-logind[1196]: Removed session 24. May 15 10:48:22.867178 kubelet[1909]: I0515 10:48:22.866586 1909 memory_manager.go:355] "RemoveStaleState removing state" podUID="04185e21-f162-45a9-ad40-10da46df426d" containerName="cilium-operator" May 15 10:48:22.867178 kubelet[1909]: I0515 10:48:22.866609 1909 memory_manager.go:355] "RemoveStaleState removing state" podUID="c07df115-bc56-4b47-bbdc-6d7a25dce805" containerName="cilium-agent" May 15 10:48:22.874648 systemd[1]: Created slice kubepods-burstable-pod39358308_2ac6_4dd8_8402_7f75c43dc62a.slice. 
May 15 10:48:22.889606 kubelet[1909]: I0515 10:48:22.889576 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5bb6\" (UniqueName: \"kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-kube-api-access-h5bb6\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.889934 kubelet[1909]: I0515 10:48:22.889919 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-etc-cni-netd\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890066 kubelet[1909]: I0515 10:48:22.890049 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-ipsec-secrets\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890174 kubelet[1909]: I0515 10:48:22.890155 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-run\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890260 kubelet[1909]: I0515 10:48:22.890244 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-hostproc\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890347 kubelet[1909]: I0515 10:48:22.890331 1909 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-cgroup\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890434 kubelet[1909]: I0515 10:48:22.890416 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cni-path\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890517 kubelet[1909]: I0515 10:48:22.890501 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-config-path\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890629 kubelet[1909]: I0515 10:48:22.890613 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-kernel\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890711 kubelet[1909]: I0515 10:48:22.890695 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-hubble-tls\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890797 kubelet[1909]: I0515 10:48:22.890778 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-xtables-lock\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890878 kubelet[1909]: I0515 10:48:22.890862 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-lib-modules\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.890969 kubelet[1909]: I0515 10:48:22.890945 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-net\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.891061 kubelet[1909]: I0515 10:48:22.891045 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-bpf-maps\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.891145 kubelet[1909]: I0515 10:48:22.891129 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-clustermesh-secrets\") pod \"cilium-v7gsv\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " pod="kube-system/cilium-v7gsv" May 15 10:48:22.900450 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 57512 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:48:22.901474 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:48:22.905817 systemd-logind[1196]: New session 25 of 
user core. May 15 10:48:22.905988 systemd[1]: Started session-25.scope. May 15 10:48:23.025521 sshd[3696]: pam_unix(sshd:session): session closed for user core May 15 10:48:23.028265 systemd[1]: sshd@24-10.0.0.121:22-10.0.0.1:57512.service: Deactivated successfully. May 15 10:48:23.028816 systemd[1]: session-25.scope: Deactivated successfully. May 15 10:48:23.029670 systemd-logind[1196]: Session 25 logged out. Waiting for processes to exit. May 15 10:48:23.030923 systemd[1]: Started sshd@25-10.0.0.121:22-10.0.0.1:57514.service. May 15 10:48:23.032049 systemd-logind[1196]: Removed session 25. May 15 10:48:23.037569 kubelet[1909]: E0515 10:48:23.037512 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:23.038298 env[1214]: time="2025-05-15T10:48:23.038001148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7gsv,Uid:39358308-2ac6-4dd8-8402-7f75c43dc62a,Namespace:kube-system,Attempt:0,}" May 15 10:48:23.054975 env[1214]: time="2025-05-15T10:48:23.054898443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:48:23.054975 env[1214]: time="2025-05-15T10:48:23.054951083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:48:23.054975 env[1214]: time="2025-05-15T10:48:23.054973255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:48:23.055183 env[1214]: time="2025-05-15T10:48:23.055145736Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e pid=3722 runtime=io.containerd.runc.v2 May 15 10:48:23.065521 systemd[1]: Started cri-containerd-b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e.scope. May 15 10:48:23.066049 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 57514 ssh2: RSA SHA256:haioSl9UPoE92ibERJujrg0rXVEisgSt061naG/EAtE May 15 10:48:23.067128 sshd[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:48:23.072149 systemd[1]: Started session-26.scope. May 15 10:48:23.072493 systemd-logind[1196]: New session 26 of user core. May 15 10:48:23.083565 env[1214]: time="2025-05-15T10:48:23.083485216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7gsv,Uid:39358308-2ac6-4dd8-8402-7f75c43dc62a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e\"" May 15 10:48:23.084236 kubelet[1909]: E0515 10:48:23.084210 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:23.086073 env[1214]: time="2025-05-15T10:48:23.086043066Z" level=info msg="CreateContainer within sandbox \"b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:48:23.097499 env[1214]: time="2025-05-15T10:48:23.097458090Z" level=info msg="CreateContainer within sandbox \"b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\"" 
May 15 10:48:23.097798 env[1214]: time="2025-05-15T10:48:23.097779545Z" level=info msg="StartContainer for \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\"" May 15 10:48:23.110752 systemd[1]: Started cri-containerd-fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb.scope. May 15 10:48:23.118783 systemd[1]: cri-containerd-fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb.scope: Deactivated successfully. May 15 10:48:23.118978 systemd[1]: Stopped cri-containerd-fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb.scope. May 15 10:48:23.135519 env[1214]: time="2025-05-15T10:48:23.135435476Z" level=info msg="shim disconnected" id=fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb May 15 10:48:23.135519 env[1214]: time="2025-05-15T10:48:23.135479561Z" level=warning msg="cleaning up after shim disconnected" id=fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb namespace=k8s.io May 15 10:48:23.135519 env[1214]: time="2025-05-15T10:48:23.135488407Z" level=info msg="cleaning up dead shim" May 15 10:48:23.144737 env[1214]: time="2025-05-15T10:48:23.144690600Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3786 runtime=io.containerd.runc.v2\ntime=\"2025-05-15T10:48:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2025-05-15T10:48:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 15 10:48:23.145011 env[1214]: time="2025-05-15T10:48:23.144919549Z" level=error msg="copy shim log" error="read /proc/self/fd/29: file already closed" May 15 10:48:23.145266 env[1214]: 
time="2025-05-15T10:48:23.145210635Z" level=error msg="Failed to pipe stderr of container \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\"" error="reading from a closed fifo" May 15 10:48:23.145570 env[1214]: time="2025-05-15T10:48:23.145503715Z" level=error msg="Failed to pipe stdout of container \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\"" error="reading from a closed fifo" May 15 10:48:23.148253 env[1214]: time="2025-05-15T10:48:23.148212063Z" level=error msg="StartContainer for \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 15 10:48:23.148519 kubelet[1909]: E0515 10:48:23.148473 1909 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb" May 15 10:48:23.150060 kubelet[1909]: E0515 10:48:23.150028 1909 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 15 10:48:23.150060 kubelet[1909]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 15 10:48:23.150060 kubelet[1909]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 15 10:48:23.150060 kubelet[1909]: rm /hostbin/cilium-mount May 15 10:48:23.150190 kubelet[1909]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h5bb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-v7gsv_kube-system(39358308-2ac6-4dd8-8402-7f75c43dc62a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 15 10:48:23.150190 kubelet[1909]: > logger="UnhandledError" May 15 10:48:23.151334 kubelet[1909]: E0515 10:48:23.151291 1909 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-v7gsv" podUID="39358308-2ac6-4dd8-8402-7f75c43dc62a" May 15 10:48:23.169723 kubelet[1909]: E0515 10:48:23.169684 1909 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:48:23.550183 env[1214]: time="2025-05-15T10:48:23.550149683Z" level=info msg="StopPodSandbox for \"b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e\"" May 15 10:48:23.550304 env[1214]: time="2025-05-15T10:48:23.550206472Z" level=info msg="Container to stop \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:48:23.555047 systemd[1]: cri-containerd-b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e.scope: Deactivated successfully. 
May 15 10:48:23.580501 env[1214]: time="2025-05-15T10:48:23.580454641Z" level=info msg="shim disconnected" id=b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e May 15 10:48:23.581130 env[1214]: time="2025-05-15T10:48:23.581103792Z" level=warning msg="cleaning up after shim disconnected" id=b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e namespace=k8s.io May 15 10:48:23.581130 env[1214]: time="2025-05-15T10:48:23.581120024Z" level=info msg="cleaning up dead shim" May 15 10:48:23.586992 env[1214]: time="2025-05-15T10:48:23.586947234Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3817 runtime=io.containerd.runc.v2\n" May 15 10:48:23.587254 env[1214]: time="2025-05-15T10:48:23.587225546Z" level=info msg="TearDown network for sandbox \"b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e\" successfully" May 15 10:48:23.587254 env[1214]: time="2025-05-15T10:48:23.587247819Z" level=info msg="StopPodSandbox for \"b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e\" returns successfully" May 15 10:48:23.698422 kubelet[1909]: I0515 10:48:23.698373 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-xtables-lock\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698422 kubelet[1909]: I0515 10:48:23.698416 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-ipsec-secrets\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698437 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-kernel\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698452 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-net\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698470 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-config-path\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698489 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5bb6\" (UniqueName: \"kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-kube-api-access-h5bb6\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698504 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-etc-cni-netd\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698497 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698517 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-bpf-maps\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698553 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-hostproc\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698555 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698571 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-hubble-tls\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698590 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-clustermesh-secrets\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698604 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-run\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698618 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-cgroup\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698634 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cni-path\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.698674 kubelet[1909]: I0515 10:48:23.698648 1909 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-lib-modules\") pod \"39358308-2ac6-4dd8-8402-7f75c43dc62a\" (UID: \"39358308-2ac6-4dd8-8402-7f75c43dc62a\") " May 15 10:48:23.699093 kubelet[1909]: I0515 10:48:23.698693 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.699093 kubelet[1909]: I0515 10:48:23.698704 1909 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.699093 kubelet[1909]: I0515 10:48:23.698738 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.699093 kubelet[1909]: I0515 10:48:23.698824 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.699210 kubelet[1909]: I0515 10:48:23.699156 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.700610 kubelet[1909]: I0515 10:48:23.699281 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.700610 kubelet[1909]: I0515 10:48:23.699541 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.700610 kubelet[1909]: I0515 10:48:23.699566 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-hostproc" (OuterVolumeSpecName: "hostproc") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.700610 kubelet[1909]: I0515 10:48:23.699582 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.700610 kubelet[1909]: I0515 10:48:23.699595 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cni-path" (OuterVolumeSpecName: "cni-path") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:48:23.700610 kubelet[1909]: I0515 10:48:23.700566 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 10:48:23.701244 kubelet[1909]: I0515 10:48:23.701227 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 10:48:23.701788 kubelet[1909]: I0515 10:48:23.701755 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-kube-api-access-h5bb6" (OuterVolumeSpecName: "kube-api-access-h5bb6") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "kube-api-access-h5bb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 10:48:23.702075 kubelet[1909]: I0515 10:48:23.702049 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 10:48:23.702915 kubelet[1909]: I0515 10:48:23.702880 1909 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "39358308-2ac6-4dd8-8402-7f75c43dc62a" (UID: "39358308-2ac6-4dd8-8402-7f75c43dc62a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 10:48:23.799322 kubelet[1909]: I0515 10:48:23.799291 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799322 kubelet[1909]: I0515 10:48:23.799311 1909 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h5bb6\" (UniqueName: \"kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-kube-api-access-h5bb6\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799322 kubelet[1909]: I0515 10:48:23.799320 1909 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799329 1909 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799337 1909 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799345 1909 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39358308-2ac6-4dd8-8402-7f75c43dc62a-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799352 1909 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799360 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799368 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799374 1909 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799381 1909 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 
kubelet[1909]: I0515 10:48:23.799388 1909 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/39358308-2ac6-4dd8-8402-7f75c43dc62a-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.799430 kubelet[1909]: I0515 10:48:23.799395 1909 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39358308-2ac6-4dd8-8402-7f75c43dc62a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 10:48:23.996039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e-rootfs.mount: Deactivated successfully. May 15 10:48:23.996131 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b73f458a8d0e59fe74147ae61a1f936a2fb801dcc6d8c99cc1447deea3a0065e-shm.mount: Deactivated successfully. May 15 10:48:23.996182 systemd[1]: var-lib-kubelet-pods-39358308\x2d2ac6\x2d4dd8\x2d8402\x2d7f75c43dc62a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5bb6.mount: Deactivated successfully. May 15 10:48:23.996234 systemd[1]: var-lib-kubelet-pods-39358308\x2d2ac6\x2d4dd8\x2d8402\x2d7f75c43dc62a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:48:23.996284 systemd[1]: var-lib-kubelet-pods-39358308\x2d2ac6\x2d4dd8\x2d8402\x2d7f75c43dc62a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 15 10:48:23.996332 systemd[1]: var-lib-kubelet-pods-39358308\x2d2ac6\x2d4dd8\x2d8402\x2d7f75c43dc62a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 10:48:24.552704 kubelet[1909]: I0515 10:48:24.552676 1909 scope.go:117] "RemoveContainer" containerID="fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb" May 15 10:48:24.553766 env[1214]: time="2025-05-15T10:48:24.553722258Z" level=info msg="RemoveContainer for \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\"" May 15 10:48:24.555771 systemd[1]: Removed slice kubepods-burstable-pod39358308_2ac6_4dd8_8402_7f75c43dc62a.slice. May 15 10:48:24.556875 env[1214]: time="2025-05-15T10:48:24.556842591Z" level=info msg="RemoveContainer for \"fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb\" returns successfully" May 15 10:48:24.584312 kubelet[1909]: I0515 10:48:24.584273 1909 memory_manager.go:355] "RemoveStaleState removing state" podUID="39358308-2ac6-4dd8-8402-7f75c43dc62a" containerName="mount-cgroup" May 15 10:48:24.586389 kubelet[1909]: W0515 10:48:24.586366 1909 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:48:24.586499 kubelet[1909]: E0515 10:48:24.586472 1909 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 10:48:24.587025 kubelet[1909]: W0515 10:48:24.586991 1909 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no 
relationship found between node 'localhost' and this object May 15 10:48:24.587025 kubelet[1909]: W0515 10:48:24.587006 1909 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 10:48:24.587025 kubelet[1909]: E0515 10:48:24.587023 1909 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 10:48:24.587232 kubelet[1909]: E0515 10:48:24.587045 1909 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 10:48:24.587232 kubelet[1909]: I0515 10:48:24.587083 1909 status_manager.go:890] "Failed to get status for pod" podUID="d338d035-075d-46eb-bd5e-6ae07c9f71fd" pod="kube-system/cilium-5xsf7" err="pods \"cilium-5xsf7\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 15 10:48:24.588787 systemd[1]: Created slice kubepods-burstable-podd338d035_075d_46eb_bd5e_6ae07c9f71fd.slice. 
May 15 10:48:24.703680 kubelet[1909]: I0515 10:48:24.703653 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-etc-cni-netd\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703680 kubelet[1909]: I0515 10:48:24.703682 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d338d035-075d-46eb-bd5e-6ae07c9f71fd-cilium-config-path\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703806 kubelet[1909]: I0515 10:48:24.703701 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-cilium-cgroup\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703806 kubelet[1909]: I0515 10:48:24.703716 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d338d035-075d-46eb-bd5e-6ae07c9f71fd-hubble-tls\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703806 kubelet[1909]: I0515 10:48:24.703731 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-cilium-run\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703806 kubelet[1909]: I0515 10:48:24.703745 1909 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-hostproc\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703806 kubelet[1909]: I0515 10:48:24.703802 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-cni-path\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703928 kubelet[1909]: I0515 10:48:24.703846 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-lib-modules\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703928 kubelet[1909]: I0515 10:48:24.703860 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-xtables-lock\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.703928 kubelet[1909]: I0515 10:48:24.703875 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-host-proc-sys-kernel\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.704104 kubelet[1909]: I0515 10:48:24.703935 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-bpf-maps\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.704104 kubelet[1909]: I0515 10:48:24.703967 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d338d035-075d-46eb-bd5e-6ae07c9f71fd-host-proc-sys-net\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.704104 kubelet[1909]: I0515 10:48:24.704033 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rx6\" (UniqueName: \"kubernetes.io/projected/d338d035-075d-46eb-bd5e-6ae07c9f71fd-kube-api-access-l5rx6\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.704104 kubelet[1909]: I0515 10:48:24.704091 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d338d035-075d-46eb-bd5e-6ae07c9f71fd-cilium-ipsec-secrets\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:24.704204 kubelet[1909]: I0515 10:48:24.704120 1909 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d338d035-075d-46eb-bd5e-6ae07c9f71fd-clustermesh-secrets\") pod \"cilium-5xsf7\" (UID: \"d338d035-075d-46eb-bd5e-6ae07c9f71fd\") " pod="kube-system/cilium-5xsf7" May 15 10:48:25.142375 kubelet[1909]: I0515 10:48:25.142325 1909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39358308-2ac6-4dd8-8402-7f75c43dc62a" path="/var/lib/kubelet/pods/39358308-2ac6-4dd8-8402-7f75c43dc62a/volumes" May 15 10:48:25.447985 kubelet[1909]: 
I0515 10:48:25.447870 1909 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T10:48:25Z","lastTransitionTime":"2025-05-15T10:48:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 10:48:25.790977 kubelet[1909]: E0515 10:48:25.790876 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:25.791287 env[1214]: time="2025-05-15T10:48:25.791242427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5xsf7,Uid:d338d035-075d-46eb-bd5e-6ae07c9f71fd,Namespace:kube-system,Attempt:0,}" May 15 10:48:25.802761 env[1214]: time="2025-05-15T10:48:25.802707826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:48:25.802761 env[1214]: time="2025-05-15T10:48:25.802745257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:48:25.802761 env[1214]: time="2025-05-15T10:48:25.802756469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:48:25.802961 env[1214]: time="2025-05-15T10:48:25.802898260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6 pid=3844 runtime=io.containerd.runc.v2 May 15 10:48:25.816179 systemd[1]: Started cri-containerd-5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6.scope. 
May 15 10:48:25.832969 env[1214]: time="2025-05-15T10:48:25.831540336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5xsf7,Uid:d338d035-075d-46eb-bd5e-6ae07c9f71fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\"" May 15 10:48:25.833058 kubelet[1909]: E0515 10:48:25.832308 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:25.834739 env[1214]: time="2025-05-15T10:48:25.834710801Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:48:25.845495 env[1214]: time="2025-05-15T10:48:25.845462306Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470\"" May 15 10:48:25.845874 env[1214]: time="2025-05-15T10:48:25.845833254Z" level=info msg="StartContainer for \"d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470\"" May 15 10:48:25.857896 systemd[1]: Started cri-containerd-d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470.scope. May 15 10:48:25.880762 env[1214]: time="2025-05-15T10:48:25.880717132Z" level=info msg="StartContainer for \"d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470\" returns successfully" May 15 10:48:25.884110 systemd[1]: cri-containerd-d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470.scope: Deactivated successfully. 
May 15 10:48:25.907798 env[1214]: time="2025-05-15T10:48:25.907748721Z" level=info msg="shim disconnected" id=d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470 May 15 10:48:25.907938 env[1214]: time="2025-05-15T10:48:25.907800280Z" level=warning msg="cleaning up after shim disconnected" id=d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470 namespace=k8s.io May 15 10:48:25.907938 env[1214]: time="2025-05-15T10:48:25.907809507Z" level=info msg="cleaning up dead shim" May 15 10:48:25.913435 env[1214]: time="2025-05-15T10:48:25.913413431Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n" May 15 10:48:26.140784 kubelet[1909]: E0515 10:48:26.140749 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:26.240202 kubelet[1909]: W0515 10:48:26.240129 1909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39358308_2ac6_4dd8_8402_7f75c43dc62a.slice/cri-containerd-fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb.scope WatchSource:0}: container "fc4b95b8dbf8efdb8c750bc873fcde66a001acdf3583be34ea67a01698f0aabb" in namespace "k8s.io": not found May 15 10:48:26.557409 kubelet[1909]: E0515 10:48:26.557146 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:26.558855 env[1214]: time="2025-05-15T10:48:26.558807327Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:48:26.568863 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1441159960.mount: Deactivated successfully. May 15 10:48:26.570956 env[1214]: time="2025-05-15T10:48:26.570900565Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130\"" May 15 10:48:26.571383 env[1214]: time="2025-05-15T10:48:26.571331226Z" level=info msg="StartContainer for \"2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130\"" May 15 10:48:26.584839 systemd[1]: Started cri-containerd-2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130.scope. May 15 10:48:26.606299 env[1214]: time="2025-05-15T10:48:26.606258455Z" level=info msg="StartContainer for \"2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130\" returns successfully" May 15 10:48:26.609960 systemd[1]: cri-containerd-2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130.scope: Deactivated successfully. 
May 15 10:48:26.629286 env[1214]: time="2025-05-15T10:48:26.629227164Z" level=info msg="shim disconnected" id=2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130 May 15 10:48:26.629286 env[1214]: time="2025-05-15T10:48:26.629271078Z" level=warning msg="cleaning up after shim disconnected" id=2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130 namespace=k8s.io May 15 10:48:26.629286 env[1214]: time="2025-05-15T10:48:26.629279564Z" level=info msg="cleaning up dead shim" May 15 10:48:26.635352 env[1214]: time="2025-05-15T10:48:26.635317361Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3989 runtime=io.containerd.runc.v2\n" May 15 10:48:27.141023 kubelet[1909]: E0515 10:48:27.140973 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:27.491383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130-rootfs.mount: Deactivated successfully. May 15 10:48:27.560843 kubelet[1909]: E0515 10:48:27.560817 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:27.562369 env[1214]: time="2025-05-15T10:48:27.562307696Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:48:27.573186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864201913.mount: Deactivated successfully. 
May 15 10:48:27.574762 env[1214]: time="2025-05-15T10:48:27.574714000Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50\"" May 15 10:48:27.575245 env[1214]: time="2025-05-15T10:48:27.575205668Z" level=info msg="StartContainer for \"eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50\"" May 15 10:48:27.590048 systemd[1]: Started cri-containerd-eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50.scope. May 15 10:48:27.612148 env[1214]: time="2025-05-15T10:48:27.612108777Z" level=info msg="StartContainer for \"eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50\" returns successfully" May 15 10:48:27.612821 systemd[1]: cri-containerd-eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50.scope: Deactivated successfully. May 15 10:48:27.633287 env[1214]: time="2025-05-15T10:48:27.633233859Z" level=info msg="shim disconnected" id=eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50 May 15 10:48:27.633287 env[1214]: time="2025-05-15T10:48:27.633281178Z" level=warning msg="cleaning up after shim disconnected" id=eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50 namespace=k8s.io May 15 10:48:27.633287 env[1214]: time="2025-05-15T10:48:27.633289785Z" level=info msg="cleaning up dead shim" May 15 10:48:27.639166 env[1214]: time="2025-05-15T10:48:27.639142964Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4046 runtime=io.containerd.runc.v2\n" May 15 10:48:28.170956 kubelet[1909]: E0515 10:48:28.170924 1909 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:48:28.491119 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50-rootfs.mount: Deactivated successfully. May 15 10:48:28.564212 kubelet[1909]: E0515 10:48:28.564182 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:48:28.565645 env[1214]: time="2025-05-15T10:48:28.565592940Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:48:28.576183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101561990.mount: Deactivated successfully. May 15 10:48:28.579051 env[1214]: time="2025-05-15T10:48:28.577708920Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6\"" May 15 10:48:28.579051 env[1214]: time="2025-05-15T10:48:28.578405318Z" level=info msg="StartContainer for \"004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6\"" May 15 10:48:28.593887 systemd[1]: Started cri-containerd-004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6.scope. May 15 10:48:28.613443 systemd[1]: cri-containerd-004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6.scope: Deactivated successfully. 
May 15 10:48:28.614421 env[1214]: time="2025-05-15T10:48:28.614386057Z" level=info msg="StartContainer for \"004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6\" returns successfully"
May 15 10:48:28.633811 env[1214]: time="2025-05-15T10:48:28.633767895Z" level=info msg="shim disconnected" id=004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6
May 15 10:48:28.633811 env[1214]: time="2025-05-15T10:48:28.633811228Z" level=warning msg="cleaning up after shim disconnected" id=004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6 namespace=k8s.io
May 15 10:48:28.633973 env[1214]: time="2025-05-15T10:48:28.633821057Z" level=info msg="cleaning up dead shim"
May 15 10:48:28.639775 env[1214]: time="2025-05-15T10:48:28.639751348Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:48:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4100 runtime=io.containerd.runc.v2\n"
May 15 10:48:29.348727 kubelet[1909]: W0515 10:48:29.348678 1909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd338d035_075d_46eb_bd5e_6ae07c9f71fd.slice/cri-containerd-d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470.scope WatchSource:0}: task d19de475bfe246eab7d221ce99041c9b0889313ac5ddc964b964c68a8fe42470 not found: not found
May 15 10:48:29.491200 systemd[1]: run-containerd-runc-k8s.io-004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6-runc.PVdfbs.mount: Deactivated successfully.
May 15 10:48:29.491293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6-rootfs.mount: Deactivated successfully.
May 15 10:48:29.571929 kubelet[1909]: E0515 10:48:29.571890 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:29.573574 env[1214]: time="2025-05-15T10:48:29.573523443Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 10:48:29.592701 env[1214]: time="2025-05-15T10:48:29.592650660Z" level=info msg="CreateContainer within sandbox \"5aa9c7db119c7a4fc4216f98664bc0e5bc283452b19e463c1a3d7383b34f6fe6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cfa74184babf95d59fee2a48fcc440f8e499fb1aa7108df9cbddc0c4686effa0\""
May 15 10:48:29.593219 env[1214]: time="2025-05-15T10:48:29.593177094Z" level=info msg="StartContainer for \"cfa74184babf95d59fee2a48fcc440f8e499fb1aa7108df9cbddc0c4686effa0\""
May 15 10:48:29.611015 systemd[1]: Started cri-containerd-cfa74184babf95d59fee2a48fcc440f8e499fb1aa7108df9cbddc0c4686effa0.scope.
May 15 10:48:29.636695 env[1214]: time="2025-05-15T10:48:29.636640851Z" level=info msg="StartContainer for \"cfa74184babf95d59fee2a48fcc440f8e499fb1aa7108df9cbddc0c4686effa0\" returns successfully"
May 15 10:48:29.893643 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 10:48:30.576084 kubelet[1909]: E0515 10:48:30.576054 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:30.588235 kubelet[1909]: I0515 10:48:30.588183 1909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5xsf7" podStartSLOduration=6.588164805 podStartE2EDuration="6.588164805s" podCreationTimestamp="2025-05-15 10:48:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:48:30.587563059 +0000 UTC m=+87.526287183" watchObservedRunningTime="2025-05-15 10:48:30.588164805 +0000 UTC m=+87.526888899"
May 15 10:48:31.329626 systemd[1]: run-containerd-runc-k8s.io-cfa74184babf95d59fee2a48fcc440f8e499fb1aa7108df9cbddc0c4686effa0-runc.VCO6WS.mount: Deactivated successfully.
May 15 10:48:31.791685 kubelet[1909]: E0515 10:48:31.791651 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:32.361872 systemd-networkd[1031]: lxc_health: Link UP
May 15 10:48:32.372291 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 10:48:32.371971 systemd-networkd[1031]: lxc_health: Gained carrier
May 15 10:48:32.454347 kubelet[1909]: W0515 10:48:32.454303 1909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd338d035_075d_46eb_bd5e_6ae07c9f71fd.slice/cri-containerd-2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130.scope WatchSource:0}: task 2aa3a68c56bde458d6581d3cdac0319b2fee9ba2d541c35e33ec88745ecea130 not found: not found
May 15 10:48:33.140777 kubelet[1909]: E0515 10:48:33.140755 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:33.708666 systemd-networkd[1031]: lxc_health: Gained IPv6LL
May 15 10:48:33.792748 kubelet[1909]: E0515 10:48:33.792715 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:34.581718 kubelet[1909]: E0515 10:48:34.581680 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:35.560546 kubelet[1909]: W0515 10:48:35.560503 1909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd338d035_075d_46eb_bd5e_6ae07c9f71fd.slice/cri-containerd-eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50.scope WatchSource:0}: task eb2eb60a2fec471e91087c47d4882deada898365246b57a828b05c911919ad50 not found: not found
May 15 10:48:35.583486 kubelet[1909]: E0515 10:48:35.583455 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 10:48:37.607581 sshd[3714]: pam_unix(sshd:session): session closed for user core
May 15 10:48:37.609949 systemd[1]: sshd@25-10.0.0.121:22-10.0.0.1:57514.service: Deactivated successfully.
May 15 10:48:37.610608 systemd[1]: session-26.scope: Deactivated successfully.
May 15 10:48:37.611315 systemd-logind[1196]: Session 26 logged out. Waiting for processes to exit.
May 15 10:48:37.611963 systemd-logind[1196]: Removed session 26.
May 15 10:48:38.665260 kubelet[1909]: W0515 10:48:38.665222 1909 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd338d035_075d_46eb_bd5e_6ae07c9f71fd.slice/cri-containerd-004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6.scope WatchSource:0}: task 004ef22445579259516b702c1a972923bc3459f4186f3c845419eab652c408b6 not found: not found
May 15 10:48:39.140759 kubelet[1909]: E0515 10:48:39.140739 1909 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"