May 15 00:55:46.823044 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 14 23:14:51 -00 2025 May 15 00:55:46.823062 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:55:46.823071 kernel: BIOS-provided physical RAM map: May 15 00:55:46.823077 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 00:55:46.823082 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 00:55:46.823088 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 00:55:46.823094 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 15 00:55:46.823100 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 00:55:46.823106 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 15 00:55:46.823112 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 15 00:55:46.823118 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 15 00:55:46.823123 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 15 00:55:46.823128 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 15 00:55:46.823134 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 00:55:46.823141 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 15 00:55:46.823148 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 15 00:55:46.823154 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 
00:55:46.823160 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 00:55:46.823166 kernel: NX (Execute Disable) protection: active May 15 00:55:46.823189 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 15 00:55:46.823195 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 15 00:55:46.823201 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 15 00:55:46.823207 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 15 00:55:46.823213 kernel: extended physical RAM map: May 15 00:55:46.823219 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 00:55:46.823226 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 00:55:46.823232 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 00:55:46.823238 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 15 00:55:46.823244 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 00:55:46.823250 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 15 00:55:46.823256 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 15 00:55:46.823261 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 15 00:55:46.823267 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 15 00:55:46.823273 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable May 15 00:55:46.823279 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 15 00:55:46.823285 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 15 00:55:46.823291 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 15 00:55:46.823297 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data May 15 00:55:46.823303 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 00:55:46.823309 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 15 00:55:46.823318 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 15 00:55:46.823324 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 00:55:46.823330 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 00:55:46.823337 kernel: efi: EFI v2.70 by EDK II May 15 00:55:46.823344 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 15 00:55:46.823350 kernel: random: crng init done May 15 00:55:46.823357 kernel: SMBIOS 2.8 present. May 15 00:55:46.823363 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 15 00:55:46.823369 kernel: Hypervisor detected: KVM May 15 00:55:46.823375 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 00:55:46.823382 kernel: kvm-clock: cpu 0, msr 39196001, primary cpu clock May 15 00:55:46.823388 kernel: kvm-clock: using sched offset of 3989453276 cycles May 15 00:55:46.823396 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 00:55:46.823403 kernel: tsc: Detected 2794.746 MHz processor May 15 00:55:46.823410 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 00:55:46.823416 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 00:55:46.823423 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 15 00:55:46.823429 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 00:55:46.823436 kernel: Using GB pages for direct mapping May 15 00:55:46.823442 kernel: Secure boot disabled May 15 00:55:46.823449 kernel: ACPI: Early table checksum verification disabled May 15 00:55:46.823456 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 15 00:55:46.823463 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 15 00:55:46.823469 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:46.823476 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:46.823482 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 15 00:55:46.823489 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:46.823496 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:46.823502 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:46.823509 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:46.823516 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 15 00:55:46.823523 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 15 00:55:46.823529 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 15 00:55:46.823536 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 15 00:55:46.823542 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 15 00:55:46.823549 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 15 00:55:46.823555 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 15 00:55:46.823561 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 15 00:55:46.823568 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 15 00:55:46.823575 kernel: No NUMA configuration found May 15 00:55:46.823582 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 15 00:55:46.823588 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 15 
00:55:46.823595 kernel: Zone ranges: May 15 00:55:46.823601 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 00:55:46.823608 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 15 00:55:46.823614 kernel: Normal empty May 15 00:55:46.823621 kernel: Movable zone start for each node May 15 00:55:46.823627 kernel: Early memory node ranges May 15 00:55:46.823634 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 15 00:55:46.823641 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 15 00:55:46.823647 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 15 00:55:46.823654 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 15 00:55:46.823660 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 15 00:55:46.823666 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 15 00:55:46.823673 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 15 00:55:46.823679 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:55:46.823686 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 15 00:55:46.823692 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 15 00:55:46.823700 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:55:46.823706 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 15 00:55:46.823713 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 15 00:55:46.823719 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 15 00:55:46.823726 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 00:55:46.823732 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 00:55:46.823738 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 00:55:46.823745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 00:55:46.823751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 00:55:46.823759 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 00:55:46.823766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 00:55:46.823772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 00:55:46.823778 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 00:55:46.823785 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 00:55:46.823792 kernel: TSC deadline timer available May 15 00:55:46.823798 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 15 00:55:46.823805 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 00:55:46.823811 kernel: kvm-guest: setup PV sched yield May 15 00:55:46.823819 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 15 00:55:46.823825 kernel: Booting paravirtualized kernel on KVM May 15 00:55:46.823836 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 00:55:46.823844 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 15 00:55:46.823851 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 15 00:55:46.823858 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 15 00:55:46.823865 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 00:55:46.823871 kernel: kvm-guest: setup async PF for cpu 0 May 15 00:55:46.823878 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 15 00:55:46.823885 kernel: kvm-guest: PV spinlocks enabled May 15 00:55:46.823891 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 00:55:46.823898 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 May 15 00:55:46.823906 kernel: Policy zone: DMA32 May 15 00:55:46.823914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:55:46.823921 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:55:46.823928 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:55:46.823936 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:55:46.823943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:55:46.823950 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 169308K reserved, 0K cma-reserved) May 15 00:55:46.823957 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:55:46.823964 kernel: ftrace: allocating 34584 entries in 136 pages May 15 00:55:46.823971 kernel: ftrace: allocated 136 pages with 2 groups May 15 00:55:46.823978 kernel: rcu: Hierarchical RCU implementation. May 15 00:55:46.823985 kernel: rcu: RCU event tracing is enabled. May 15 00:55:46.823992 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:55:46.824000 kernel: Rude variant of Tasks RCU enabled. May 15 00:55:46.824007 kernel: Tracing variant of Tasks RCU enabled. May 15 00:55:46.824014 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 00:55:46.824021 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:55:46.824027 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 00:55:46.824034 kernel: Console: colour dummy device 80x25 May 15 00:55:46.824041 kernel: printk: console [ttyS0] enabled May 15 00:55:46.824048 kernel: ACPI: Core revision 20210730 May 15 00:55:46.824055 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 00:55:46.824063 kernel: APIC: Switch to symmetric I/O mode setup May 15 00:55:46.824070 kernel: x2apic enabled May 15 00:55:46.824076 kernel: Switched APIC routing to physical x2apic. May 15 00:55:46.824083 kernel: kvm-guest: setup PV IPIs May 15 00:55:46.824090 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 00:55:46.824097 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 00:55:46.824104 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746) May 15 00:55:46.824111 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 00:55:46.824117 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 00:55:46.824125 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 00:55:46.824132 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 00:55:46.824139 kernel: Spectre V2 : Mitigation: Retpolines May 15 00:55:46.824146 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 00:55:46.824153 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 00:55:46.824159 kernel: RETBleed: Mitigation: untrained return thunk May 15 00:55:46.824166 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 00:55:46.824192 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 15 00:55:46.824199 kernel: 
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 00:55:46.824207 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 00:55:46.824214 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 00:55:46.824221 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 00:55:46.824228 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 15 00:55:46.824235 kernel: Freeing SMP alternatives memory: 32K May 15 00:55:46.824241 kernel: pid_max: default: 32768 minimum: 301 May 15 00:55:46.824248 kernel: LSM: Security Framework initializing May 15 00:55:46.824255 kernel: SELinux: Initializing. May 15 00:55:46.824262 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:55:46.824270 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:55:46.824277 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 00:55:46.824284 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 00:55:46.824291 kernel: ... version: 0 May 15 00:55:46.824298 kernel: ... bit width: 48 May 15 00:55:46.824304 kernel: ... generic registers: 6 May 15 00:55:46.824311 kernel: ... value mask: 0000ffffffffffff May 15 00:55:46.824318 kernel: ... max period: 00007fffffffffff May 15 00:55:46.824325 kernel: ... fixed-purpose events: 0 May 15 00:55:46.824333 kernel: ... event mask: 000000000000003f May 15 00:55:46.824339 kernel: signal: max sigframe size: 1776 May 15 00:55:46.824346 kernel: rcu: Hierarchical SRCU implementation. May 15 00:55:46.824353 kernel: smp: Bringing up secondary CPUs ... May 15 00:55:46.824359 kernel: x86: Booting SMP configuration: May 15 00:55:46.824366 kernel: .... 
node #0, CPUs: #1 May 15 00:55:46.824373 kernel: kvm-clock: cpu 1, msr 39196041, secondary cpu clock May 15 00:55:46.824380 kernel: kvm-guest: setup async PF for cpu 1 May 15 00:55:46.824387 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 15 00:55:46.824394 kernel: #2 May 15 00:55:46.824402 kernel: kvm-clock: cpu 2, msr 39196081, secondary cpu clock May 15 00:55:46.824408 kernel: kvm-guest: setup async PF for cpu 2 May 15 00:55:46.824415 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 15 00:55:46.824422 kernel: #3 May 15 00:55:46.824428 kernel: kvm-clock: cpu 3, msr 391960c1, secondary cpu clock May 15 00:55:46.824435 kernel: kvm-guest: setup async PF for cpu 3 May 15 00:55:46.824442 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 15 00:55:46.824449 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:55:46.824455 kernel: smpboot: Max logical packages: 1 May 15 00:55:46.824463 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) May 15 00:55:46.824470 kernel: devtmpfs: initialized May 15 00:55:46.824477 kernel: x86/mm: Memory block size: 128MB May 15 00:55:46.824484 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 15 00:55:46.824491 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 15 00:55:46.824497 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 15 00:55:46.824504 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 15 00:55:46.824511 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 15 00:55:46.824518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:55:46.824526 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:55:46.824533 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:55:46.824540 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family May 15 00:55:46.824547 kernel: audit: initializing netlink subsys (disabled) May 15 00:55:46.824554 kernel: audit: type=2000 audit(1747270546.232:1): state=initialized audit_enabled=0 res=1 May 15 00:55:46.824560 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:55:46.824567 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 00:55:46.824574 kernel: cpuidle: using governor menu May 15 00:55:46.824581 kernel: ACPI: bus type PCI registered May 15 00:55:46.824588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:55:46.824595 kernel: dca service started, version 1.12.1 May 15 00:55:46.824602 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 15 00:55:46.824609 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 15 00:55:46.824616 kernel: PCI: Using configuration type 1 for base access May 15 00:55:46.824623 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 00:55:46.824630 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:55:46.824637 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:55:46.824644 kernel: ACPI: Added _OSI(Module Device) May 15 00:55:46.824651 kernel: ACPI: Added _OSI(Processor Device) May 15 00:55:46.824658 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:55:46.824665 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:55:46.824671 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 00:55:46.824678 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 00:55:46.824685 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 00:55:46.824692 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:55:46.824698 kernel: ACPI: Interpreter enabled May 15 00:55:46.824705 kernel: ACPI: PM: (supports S0 S3 S5) May 15 00:55:46.824713 kernel: ACPI: Using IOAPIC for interrupt routing May 15 00:55:46.824720 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 00:55:46.824727 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 00:55:46.824734 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:55:46.824844 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:55:46.824915 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 00:55:46.824983 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 00:55:46.824994 kernel: PCI host bridge to bus 0000:00 May 15 00:55:46.825068 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 00:55:46.825132 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 00:55:46.825214 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 00:55:46.825275 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 15 
00:55:46.825336 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 00:55:46.825396 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 15 00:55:46.825460 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:55:46.825540 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 15 00:55:46.825616 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 15 00:55:46.825686 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 15 00:55:46.825754 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 15 00:55:46.825823 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 15 00:55:46.825898 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 15 00:55:46.825965 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 00:55:46.826124 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:55:46.826222 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 15 00:55:46.826296 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 15 00:55:46.826365 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 15 00:55:46.826441 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 15 00:55:46.826514 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 15 00:55:46.826583 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 15 00:55:46.826650 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 15 00:55:46.826726 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 00:55:46.826796 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 15 00:55:46.826865 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 15 00:55:46.826935 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 15 00:55:46.827006 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] May 15 00:55:46.827086 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 15 00:55:46.827153 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 00:55:46.827252 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 15 00:55:46.827321 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 15 00:55:46.827389 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 15 00:55:46.827461 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 15 00:55:46.827533 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 15 00:55:46.827542 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 00:55:46.827550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 00:55:46.827557 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 00:55:46.827564 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 00:55:46.827570 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 00:55:46.827577 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 00:55:46.827584 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 00:55:46.827593 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 00:55:46.827599 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 00:55:46.827606 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 00:55:46.827613 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 15 00:55:46.827620 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 00:55:46.827627 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 00:55:46.827633 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 00:55:46.827640 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 00:55:46.827647 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 
00:55:46.827655 kernel: iommu: Default domain type: Translated May 15 00:55:46.827662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 00:55:46.827731 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 00:55:46.827799 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 00:55:46.827872 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 00:55:46.827882 kernel: vgaarb: loaded May 15 00:55:46.827889 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 00:55:46.827896 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 00:55:46.827903 kernel: PTP clock support registered May 15 00:55:46.827912 kernel: Registered efivars operations May 15 00:55:46.827919 kernel: PCI: Using ACPI for IRQ routing May 15 00:55:46.827926 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 00:55:46.827932 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 15 00:55:46.827939 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 15 00:55:46.827946 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 15 00:55:46.827953 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 15 00:55:46.827959 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 15 00:55:46.827966 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 15 00:55:46.827974 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 00:55:46.827981 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 00:55:46.827988 kernel: clocksource: Switched to clocksource kvm-clock May 15 00:55:46.827995 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:55:46.828002 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:55:46.828009 kernel: pnp: PnP ACPI init May 15 00:55:46.828081 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 15 00:55:46.828092 kernel: pnp: PnP ACPI: found 6 devices 
May 15 00:55:46.828101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 00:55:46.828108 kernel: NET: Registered PF_INET protocol family
May 15 00:55:46.828115 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:55:46.828122 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:55:46.828129 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:55:46.828136 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:55:46.828143 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 15 00:55:46.828150 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:55:46.828158 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:55:46.828165 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:55:46.828211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:55:46.828218 kernel: NET: Registered PF_XDP protocol family
May 15 00:55:46.828292 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 15 00:55:46.828363 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 15 00:55:46.828424 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 00:55:46.828485 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 00:55:46.828549 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 00:55:46.828608 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 15 00:55:46.828669 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 00:55:46.828737 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 15 00:55:46.828747 kernel: PCI: CLS 0 bytes, default 64
May 15 00:55:46.828754 kernel: Initialise system trusted keyrings
May 15 00:55:46.828761 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:55:46.828768 kernel: Key type asymmetric registered
May 15 00:55:46.828775 kernel: Asymmetric key parser 'x509' registered
May 15 00:55:46.828784 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 00:55:46.828791 kernel: io scheduler mq-deadline registered
May 15 00:55:46.828806 kernel: io scheduler kyber registered
May 15 00:55:46.828814 kernel: io scheduler bfq registered
May 15 00:55:46.828822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 00:55:46.828829 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 00:55:46.828838 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 00:55:46.828846 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 00:55:46.828854 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:55:46.828864 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 00:55:46.828872 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 00:55:46.828879 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 00:55:46.828886 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 00:55:46.828958 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 00:55:46.828969 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 00:55:46.829031 kernel: rtc_cmos 00:04: registered as rtc0
May 15 00:55:46.829094 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T00:55:46 UTC (1747270546)
May 15 00:55:46.829159 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 00:55:46.829186 kernel: efifb: probing for efifb
May 15 00:55:46.829195 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
May 15 00:55:46.829202 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
May 15 00:55:46.829209 kernel: efifb: scrolling: redraw
May 15 00:55:46.829217 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 15 00:55:46.829224 kernel: Console: switching to colour frame buffer device 160x50
May 15 00:55:46.829231 kernel: fb0: EFI VGA frame buffer device
May 15 00:55:46.829238 kernel: pstore: Registered efi as persistent store backend
May 15 00:55:46.829247 kernel: NET: Registered PF_INET6 protocol family
May 15 00:55:46.829254 kernel: Segment Routing with IPv6
May 15 00:55:46.829262 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:55:46.829270 kernel: NET: Registered PF_PACKET protocol family
May 15 00:55:46.829277 kernel: Key type dns_resolver registered
May 15 00:55:46.829284 kernel: IPI shorthand broadcast: enabled
May 15 00:55:46.829293 kernel: sched_clock: Marking stable (415545557, 127124716)->(588197982, -45527709)
May 15 00:55:46.829300 kernel: registered taskstats version 1
May 15 00:55:46.829307 kernel: Loading compiled-in X.509 certificates
May 15 00:55:46.829315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: a3400373b5c34ccb74f940604f224840f2b40bdd'
May 15 00:55:46.829322 kernel: Key type .fscrypt registered
May 15 00:55:46.829329 kernel: Key type fscrypt-provisioning registered
May 15 00:55:46.829336 kernel: pstore: Using crash dump compression: deflate
May 15 00:55:46.829343 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:55:46.829351 kernel: ima: Allocated hash algorithm: sha1
May 15 00:55:46.829359 kernel: ima: No architecture policies found
May 15 00:55:46.829366 kernel: clk: Disabling unused clocks
May 15 00:55:46.829373 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 15 00:55:46.829381 kernel: Write protecting the kernel read-only data: 28672k
May 15 00:55:46.829388 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 15 00:55:46.829396 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 15 00:55:46.829403 kernel: Run /init as init process
May 15 00:55:46.829410 kernel: with arguments:
May 15 00:55:46.829419 kernel: /init
May 15 00:55:46.829426 kernel: with environment:
May 15 00:55:46.829432 kernel: HOME=/
May 15 00:55:46.829439 kernel: TERM=linux
May 15 00:55:46.829447 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:55:46.829456 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 00:55:46.829465 systemd[1]: Detected virtualization kvm.
May 15 00:55:46.829473 systemd[1]: Detected architecture x86-64.
May 15 00:55:46.829482 systemd[1]: Running in initrd.
May 15 00:55:46.829489 systemd[1]: No hostname configured, using default hostname.
May 15 00:55:46.829497 systemd[1]: Hostname set to .
May 15 00:55:46.829505 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:55:46.829512 systemd[1]: Queued start job for default target initrd.target.
May 15 00:55:46.829520 systemd[1]: Started systemd-ask-password-console.path.
May 15 00:55:46.829527 systemd[1]: Reached target cryptsetup.target.
May 15 00:55:46.829535 systemd[1]: Reached target paths.target.
May 15 00:55:46.829542 systemd[1]: Reached target slices.target.
May 15 00:55:46.829551 systemd[1]: Reached target swap.target.
May 15 00:55:46.829558 systemd[1]: Reached target timers.target.
May 15 00:55:46.829566 systemd[1]: Listening on iscsid.socket.
May 15 00:55:46.829574 systemd[1]: Listening on iscsiuio.socket.
May 15 00:55:46.829581 systemd[1]: Listening on systemd-journald-audit.socket.
May 15 00:55:46.829589 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 15 00:55:46.829597 systemd[1]: Listening on systemd-journald.socket.
May 15 00:55:46.829606 systemd[1]: Listening on systemd-networkd.socket.
May 15 00:55:46.829613 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 00:55:46.829621 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 00:55:46.829629 systemd[1]: Reached target sockets.target.
May 15 00:55:46.829636 systemd[1]: Starting kmod-static-nodes.service...
May 15 00:55:46.829644 systemd[1]: Finished network-cleanup.service.
May 15 00:55:46.829651 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:55:46.829659 systemd[1]: Starting systemd-journald.service...
May 15 00:55:46.829667 systemd[1]: Starting systemd-modules-load.service...
May 15 00:55:46.829675 systemd[1]: Starting systemd-resolved.service...
May 15 00:55:46.829683 systemd[1]: Starting systemd-vconsole-setup.service...
May 15 00:55:46.829691 systemd[1]: Finished kmod-static-nodes.service.
May 15 00:55:46.829699 kernel: audit: type=1130 audit(1747270546.822:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.829706 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:55:46.829714 kernel: audit: type=1130 audit(1747270546.827:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.829724 systemd-journald[198]: Journal started
May 15 00:55:46.829761 systemd-journald[198]: Runtime Journal (/run/log/journal/ed2ced3b209e428b90d27fcea0a68792) is 6.0M, max 48.4M, 42.4M free.
May 15 00:55:46.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.831197 systemd[1]: Started systemd-journald.service.
May 15 00:55:46.832272 systemd-modules-load[199]: Inserted module 'overlay'
May 15 00:55:46.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.832552 systemd[1]: Finished systemd-vconsole-setup.service.
May 15 00:55:46.842754 kernel: audit: type=1130 audit(1747270546.832:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.842787 kernel: audit: type=1130 audit(1747270546.838:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.838827 systemd[1]: Starting dracut-cmdline-ask.service...
May 15 00:55:46.848554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 00:55:46.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.851465 systemd-resolved[200]: Positive Trust Anchors:
May 15 00:55:46.858230 kernel: audit: type=1130 audit(1747270546.854:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.851474 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:55:46.851499 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 00:55:46.853723 systemd-resolved[200]: Defaulting to hostname 'linux'.
May 15 00:55:46.854435 systemd[1]: Started systemd-resolved.service.
May 15 00:55:46.854587 systemd[1]: Reached target nss-lookup.target.
May 15 00:55:46.863528 kernel: audit: type=1130 audit(1747270546.859:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.859696 systemd[1]: Finished dracut-cmdline-ask.service.
May 15 00:55:46.863460 systemd[1]: Starting dracut-cmdline.service...
May 15 00:55:46.866217 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 00:55:46.873073 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:55:46.873097 kernel: audit: type=1130 audit(1747270546.867:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.875720 dracut-cmdline[218]: dracut-dracut-053
May 15 00:55:46.877590 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54
May 15 00:55:46.882726 kernel: Bridge firewalling registered
May 15 00:55:46.878055 systemd-modules-load[199]: Inserted module 'br_netfilter'
May 15 00:55:46.898193 kernel: SCSI subsystem initialized
May 15 00:55:46.910343 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 00:55:46.910386 kernel: device-mapper: uevent: version 1.0.3
May 15 00:55:46.910396 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 15 00:55:46.912955 systemd-modules-load[199]: Inserted module 'dm_multipath'
May 15 00:55:46.913609 systemd[1]: Finished systemd-modules-load.service.
May 15 00:55:46.918470 kernel: audit: type=1130 audit(1747270546.914:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.915325 systemd[1]: Starting systemd-sysctl.service...
May 15 00:55:46.925226 systemd[1]: Finished systemd-sysctl.service.
May 15 00:55:46.929532 kernel: audit: type=1130 audit(1747270546.925:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:46.939195 kernel: Loading iSCSI transport class v2.0-870.
May 15 00:55:46.955193 kernel: iscsi: registered transport (tcp)
May 15 00:55:46.977200 kernel: iscsi: registered transport (qla4xxx)
May 15 00:55:46.977220 kernel: QLogic iSCSI HBA Driver
May 15 00:55:47.004914 systemd[1]: Finished dracut-cmdline.service.
May 15 00:55:47.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:47.007386 systemd[1]: Starting dracut-pre-udev.service...
May 15 00:55:47.052206 kernel: raid6: avx2x4 gen() 29790 MB/s
May 15 00:55:47.069193 kernel: raid6: avx2x4 xor() 7500 MB/s
May 15 00:55:47.086193 kernel: raid6: avx2x2 gen() 32166 MB/s
May 15 00:55:47.103195 kernel: raid6: avx2x2 xor() 19310 MB/s
May 15 00:55:47.120191 kernel: raid6: avx2x1 gen() 26581 MB/s
May 15 00:55:47.137195 kernel: raid6: avx2x1 xor() 15364 MB/s
May 15 00:55:47.154191 kernel: raid6: sse2x4 gen() 14844 MB/s
May 15 00:55:47.171191 kernel: raid6: sse2x4 xor() 7112 MB/s
May 15 00:55:47.188196 kernel: raid6: sse2x2 gen() 16160 MB/s
May 15 00:55:47.205191 kernel: raid6: sse2x2 xor() 9859 MB/s
May 15 00:55:47.222203 kernel: raid6: sse2x1 gen() 12269 MB/s
May 15 00:55:47.239735 kernel: raid6: sse2x1 xor() 7721 MB/s
May 15 00:55:47.239749 kernel: raid6: using algorithm avx2x2 gen() 32166 MB/s
May 15 00:55:47.239758 kernel: raid6: .... xor() 19310 MB/s, rmw enabled
May 15 00:55:47.240475 kernel: raid6: using avx2x2 recovery algorithm
May 15 00:55:47.254235 kernel: xor: automatically using best checksumming function avx
May 15 00:55:47.349212 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 15 00:55:47.356408 systemd[1]: Finished dracut-pre-udev.service.
May 15 00:55:47.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:47.358000 audit: BPF prog-id=7 op=LOAD
May 15 00:55:47.358000 audit: BPF prog-id=8 op=LOAD
May 15 00:55:47.358833 systemd[1]: Starting systemd-udevd.service...
May 15 00:55:47.370745 systemd-udevd[402]: Using default interface naming scheme 'v252'.
May 15 00:55:47.374336 systemd[1]: Started systemd-udevd.service.
May 15 00:55:47.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:47.377037 systemd[1]: Starting dracut-pre-trigger.service...
May 15 00:55:47.386150 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
May 15 00:55:47.408269 systemd[1]: Finished dracut-pre-trigger.service.
May 15 00:55:47.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:47.409151 systemd[1]: Starting systemd-udev-trigger.service...
May 15 00:55:47.438893 systemd[1]: Finished systemd-udev-trigger.service.
May 15 00:55:47.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:47.469215 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 00:55:47.475063 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 00:55:47.475080 kernel: GPT:9289727 != 19775487
May 15 00:55:47.475095 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 00:55:47.475104 kernel: GPT:9289727 != 19775487
May 15 00:55:47.475112 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 00:55:47.475120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:47.484194 kernel: cryptd: max_cpu_qlen set to 1000
May 15 00:55:47.497114 kernel: libata version 3.00 loaded.
May 15 00:55:47.497138 kernel: AVX2 version of gcm_enc/dec engaged.
May 15 00:55:47.504202 kernel: AES CTR mode by8 optimization enabled
May 15 00:55:47.511187 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
May 15 00:55:47.511243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 15 00:55:47.516137 kernel: ahci 0000:00:1f.2: version 3.0
May 15 00:55:47.535446 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 00:55:47.535461 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 15 00:55:47.535547 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 00:55:47.535623 kernel: scsi host0: ahci
May 15 00:55:47.535714 kernel: scsi host1: ahci
May 15 00:55:47.535797 kernel: scsi host2: ahci
May 15 00:55:47.535876 kernel: scsi host3: ahci
May 15 00:55:47.535959 kernel: scsi host4: ahci
May 15 00:55:47.536038 kernel: scsi host5: ahci
May 15 00:55:47.536120 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 15 00:55:47.536130 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 15 00:55:47.536141 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 15 00:55:47.536150 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 15 00:55:47.536167 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 15 00:55:47.536201 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 15 00:55:47.512972 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 15 00:55:47.524834 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 15 00:55:47.530207 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 15 00:55:47.538305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 00:55:47.541292 systemd[1]: Starting disk-uuid.service...
May 15 00:55:47.548300 disk-uuid[510]: Primary Header is updated.
May 15 00:55:47.548300 disk-uuid[510]: Secondary Entries is updated.
May 15 00:55:47.548300 disk-uuid[510]: Secondary Header is updated.
May 15 00:55:47.553190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:47.556189 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:47.849626 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 15 00:55:47.849695 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 00:55:47.849705 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 00:55:47.849713 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 00:55:47.849722 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 00:55:47.851203 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 00:55:47.852199 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 15 00:55:47.853507 kernel: ata3.00: applying bridge limits
May 15 00:55:47.853517 kernel: ata3.00: configured for UDMA/100
May 15 00:55:47.854198 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 15 00:55:47.887192 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 15 00:55:47.904673 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 15 00:55:47.904689 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 15 00:55:48.556876 disk-uuid[527]: The operation has completed successfully.
May 15 00:55:48.558376 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:48.576380 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 00:55:48.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.576461 systemd[1]: Finished disk-uuid.service.
May 15 00:55:48.585158 systemd[1]: Starting verity-setup.service...
May 15 00:55:48.596200 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 15 00:55:48.614527 systemd[1]: Found device dev-mapper-usr.device.
May 15 00:55:48.616495 systemd[1]: Mounting sysusr-usr.mount...
May 15 00:55:48.618137 systemd[1]: Finished verity-setup.service.
May 15 00:55:48.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.674195 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 15 00:55:48.674467 systemd[1]: Mounted sysusr-usr.mount.
May 15 00:55:48.674655 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 15 00:55:48.675866 systemd[1]: Starting ignition-setup.service...
May 15 00:55:48.678337 systemd[1]: Starting parse-ip-for-networkd.service...
May 15 00:55:48.688946 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:55:48.688974 kernel: BTRFS info (device vda6): using free space tree
May 15 00:55:48.688988 kernel: BTRFS info (device vda6): has skinny extents
May 15 00:55:48.695417 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 00:55:48.703334 systemd[1]: Finished ignition-setup.service.
May 15 00:55:48.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.704126 systemd[1]: Starting ignition-fetch-offline.service...
May 15 00:55:48.736664 ignition[650]: Ignition 2.14.0
May 15 00:55:48.736676 ignition[650]: Stage: fetch-offline
May 15 00:55:48.736743 ignition[650]: no configs at "/usr/lib/ignition/base.d"
May 15 00:55:48.736752 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:48.736839 ignition[650]: parsed url from cmdline: ""
May 15 00:55:48.736841 ignition[650]: no config URL provided
May 15 00:55:48.736845 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:55:48.736852 ignition[650]: no config at "/usr/lib/ignition/user.ign"
May 15 00:55:48.736867 ignition[650]: op(1): [started] loading QEMU firmware config module
May 15 00:55:48.736871 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 00:55:48.741203 ignition[650]: op(1): [finished] loading QEMU firmware config module
May 15 00:55:48.744424 systemd[1]: Finished parse-ip-for-networkd.service.
May 15 00:55:48.747395 systemd[1]: Starting systemd-networkd.service...
May 15 00:55:48.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.746000 audit: BPF prog-id=9 op=LOAD
May 15 00:55:48.784891 ignition[650]: parsing config with SHA512: 7a814bb8d41f2f215150f6f06abeb3debd42817a5c0a524729d84c2e2ba096fa947ccca1df420a656bb9faeb5c4e9f48e9f064839712e113adb3b6d9126e3814
May 15 00:55:48.791637 unknown[650]: fetched base config from "system"
May 15 00:55:48.791649 unknown[650]: fetched user config from "qemu"
May 15 00:55:48.792255 ignition[650]: fetch-offline: fetch-offline passed
May 15 00:55:48.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.793296 systemd[1]: Finished ignition-fetch-offline.service.
May 15 00:55:48.792316 ignition[650]: Ignition finished successfully
May 15 00:55:48.805219 systemd-networkd[720]: lo: Link UP
May 15 00:55:48.805228 systemd-networkd[720]: lo: Gained carrier
May 15 00:55:48.806931 systemd-networkd[720]: Enumeration completed
May 15 00:55:48.807070 systemd[1]: Started systemd-networkd.service.
May 15 00:55:48.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.809316 systemd[1]: Reached target network.target.
May 15 00:55:48.809338 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:55:48.809398 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 00:55:48.810278 systemd[1]: Starting ignition-kargs.service...
May 15 00:55:48.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.811365 systemd[1]: Starting iscsiuio.service...
May 15 00:55:48.814141 systemd-networkd[720]: eth0: Link UP
May 15 00:55:48.814144 systemd-networkd[720]: eth0: Gained carrier
May 15 00:55:48.815214 systemd[1]: Started iscsiuio.service.
May 15 00:55:48.817561 systemd[1]: Starting iscsid.service...
May 15 00:55:48.820901 ignition[723]: Ignition 2.14.0
May 15 00:55:48.823619 iscsid[731]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 15 00:55:48.823619 iscsid[731]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 15 00:55:48.823619 iscsid[731]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 15 00:55:48.823619 iscsid[731]: If using hardware iscsi like qla4xxx this message can be ignored.
May 15 00:55:48.823619 iscsid[731]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 15 00:55:48.823619 iscsid[731]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 15 00:55:48.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.822104 systemd[1]: Started iscsid.service.
May 15 00:55:48.820908 ignition[723]: Stage: kargs
May 15 00:55:48.830023 systemd[1]: Finished ignition-kargs.service.
May 15 00:55:48.821006 ignition[723]: no configs at "/usr/lib/ignition/base.d"
May 15 00:55:48.821016 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:48.822288 ignition[723]: kargs: kargs passed
May 15 00:55:48.822331 ignition[723]: Ignition finished successfully
May 15 00:55:48.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.840757 systemd[1]: Starting dracut-initqueue.service...
May 15 00:55:48.841376 systemd[1]: Starting ignition-disks.service...
May 15 00:55:48.842238 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:55:48.850445 ignition[734]: Ignition 2.14.0
May 15 00:55:48.850457 ignition[734]: Stage: disks
May 15 00:55:48.850555 ignition[734]: no configs at "/usr/lib/ignition/base.d"
May 15 00:55:48.852143 systemd[1]: Finished dracut-initqueue.service.
May 15 00:55:48.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.850566 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:48.854063 systemd[1]: Finished ignition-disks.service.
May 15 00:55:48.851845 ignition[734]: disks: disks passed
May 15 00:55:48.855496 systemd[1]: Reached target initrd-root-device.target.
May 15 00:55:48.851885 ignition[734]: Ignition finished successfully
May 15 00:55:48.857440 systemd[1]: Reached target local-fs-pre.target.
May 15 00:55:48.859046 systemd[1]: Reached target local-fs.target.
May 15 00:55:48.860470 systemd[1]: Reached target remote-fs-pre.target.
May 15 00:55:48.862144 systemd[1]: Reached target remote-cryptsetup.target.
May 15 00:55:48.863761 systemd[1]: Reached target remote-fs.target.
May 15 00:55:48.865345 systemd[1]: Reached target sysinit.target.
May 15 00:55:48.866861 systemd[1]: Reached target basic.target.
May 15 00:55:48.868028 systemd[1]: Starting dracut-pre-mount.service...
May 15 00:55:48.875393 systemd[1]: Finished dracut-pre-mount.service.
May 15 00:55:48.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.876164 systemd[1]: Starting systemd-fsck-root.service...
May 15 00:55:48.884968 systemd-fsck[755]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 15 00:55:48.890375 systemd[1]: Finished systemd-fsck-root.service.
May 15 00:55:48.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.893668 systemd[1]: Mounting sysroot.mount...
May 15 00:55:48.900191 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 15 00:55:48.900363 systemd[1]: Mounted sysroot.mount.
May 15 00:55:48.901759 systemd[1]: Reached target initrd-root-fs.target.
May 15 00:55:48.903406 systemd[1]: Mounting sysroot-usr.mount...
May 15 00:55:48.904480 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 15 00:55:48.904508 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 00:55:48.904525 systemd[1]: Reached target ignition-diskful.target.
May 15 00:55:48.906436 systemd[1]: Mounted sysroot-usr.mount.
May 15 00:55:48.908453 systemd[1]: Starting initrd-setup-root.service...
May 15 00:55:48.913003 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory
May 15 00:55:48.915585 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory
May 15 00:55:48.919274 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory
May 15 00:55:48.923105 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 00:55:48.945701 systemd[1]: Finished initrd-setup-root.service.
May 15 00:55:48.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.946469 systemd[1]: Starting ignition-mount.service...
May 15 00:55:48.948636 systemd[1]: Starting sysroot-boot.service...
May 15 00:55:48.953089 bash[806]: umount: /sysroot/usr/share/oem: not mounted.
May 15 00:55:48.961556 ignition[808]: INFO : Ignition 2.14.0
May 15 00:55:48.961556 ignition[808]: INFO : Stage: mount
May 15 00:55:48.964311 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:55:48.964311 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:48.964311 ignition[808]: INFO : mount: mount passed
May 15 00:55:48.964311 ignition[808]: INFO : Ignition finished successfully
May 15 00:55:48.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:48.963010 systemd[1]: Finished ignition-mount.service.
May 15 00:55:48.964452 systemd[1]: Finished sysroot-boot.service.
May 15 00:55:49.624758 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 15 00:55:49.631573 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
May 15 00:55:49.631601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:55:49.631611 kernel: BTRFS info (device vda6): using free space tree
May 15 00:55:49.633188 kernel: BTRFS info (device vda6): has skinny extents
May 15 00:55:49.636288 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 15 00:55:49.638654 systemd[1]: Starting ignition-files.service...
May 15 00:55:49.651615 ignition[836]: INFO : Ignition 2.14.0
May 15 00:55:49.651615 ignition[836]: INFO : Stage: files
May 15 00:55:49.653196 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:55:49.653196 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:49.656228 ignition[836]: DEBUG : files: compiled without relabeling support, skipping
May 15 00:55:49.657687 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 00:55:49.657687 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 00:55:49.660886 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 00:55:49.660886 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 00:55:49.663879 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 00:55:49.663879 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 00:55:49.663879 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 00:55:49.661002 unknown[836]: wrote ssh authorized keys file for user: core
May 15 00:55:49.838382 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 00:55:50.648442 systemd-networkd[720]: eth0: Gained IPv6LL
May 15 00:55:51.572922 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 00:55:51.575242 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:55:51.575242 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 00:55:52.039116 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 00:55:52.131322 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:55:52.131322 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:52.134965 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 00:55:52.438514 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 00:55:52.830941 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:52.830941 ignition[836]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
May 15 00:55:52.835345 ignition[836]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:55:52.860962 ignition[836]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:55:52.862695 ignition[836]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 00:55:52.864119 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:55:52.865845 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:55:52.867555 ignition[836]: INFO : files: files passed
May 15 00:55:52.867555 ignition[836]: INFO : Ignition finished successfully
May 15 00:55:52.870143 systemd[1]: Finished ignition-files.service.
May 15 00:55:52.876110 kernel: kauditd_printk_skb: 25 callbacks suppressed
May 15 00:55:52.876132 kernel: audit: type=1130 audit(1747270552.871:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.876134 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 00:55:52.876403 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 00:55:52.876961 systemd[1]: Starting ignition-quench.service...
May 15 00:55:52.887767 kernel: audit: type=1130 audit(1747270552.881:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.887781 kernel: audit: type=1131 audit(1747270552.881:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.879520 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 00:55:52.879583 systemd[1]: Finished ignition-quench.service.
May 15 00:55:52.892990 initrd-setup-root-after-ignition[862]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 15 00:55:52.895723 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:55:52.897567 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 00:55:52.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.899504 systemd[1]: Reached target ignition-complete.target.
May 15 00:55:52.904144 kernel: audit: type=1130 audit(1747270552.899:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.904211 systemd[1]: Starting initrd-parse-etc.service...
May 15 00:55:52.916124 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 00:55:52.916216 systemd[1]: Finished initrd-parse-etc.service.
May 15 00:55:52.924474 kernel: audit: type=1130 audit(1747270552.917:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.924487 kernel: audit: type=1131 audit(1747270552.917:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.917357 systemd[1]: Reached target initrd-fs.target.
May 15 00:55:52.925294 systemd[1]: Reached target initrd.target.
May 15 00:55:52.926736 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 00:55:52.927434 systemd[1]: Starting dracut-pre-pivot.service...
May 15 00:55:52.938676 systemd[1]: Finished dracut-pre-pivot.service.
May 15 00:55:52.943422 kernel: audit: type=1130 audit(1747270552.938:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.943468 systemd[1]: Starting initrd-cleanup.service...
May 15 00:55:52.951971 systemd[1]: Stopped target nss-lookup.target.
May 15 00:55:52.952869 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 00:55:52.954485 systemd[1]: Stopped target timers.target.
May 15 00:55:52.956109 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 00:55:52.961430 kernel: audit: type=1131 audit(1747270552.957:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.956202 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 00:55:52.957740 systemd[1]: Stopped target initrd.target.
May 15 00:55:52.964746 systemd[1]: Stopped target basic.target.
May 15 00:55:52.966329 systemd[1]: Stopped target ignition-complete.target.
May 15 00:55:52.968139 systemd[1]: Stopped target ignition-diskful.target.
May 15 00:55:52.969912 systemd[1]: Stopped target initrd-root-device.target.
May 15 00:55:52.971744 systemd[1]: Stopped target remote-fs.target.
May 15 00:55:52.973380 systemd[1]: Stopped target remote-fs-pre.target.
May 15 00:55:52.975085 systemd[1]: Stopped target sysinit.target.
May 15 00:55:52.976674 systemd[1]: Stopped target local-fs.target.
May 15 00:55:52.978275 systemd[1]: Stopped target local-fs-pre.target.
May 15 00:55:52.979924 systemd[1]: Stopped target swap.target.
May 15 00:55:52.981407 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 00:55:52.982412 systemd[1]: Stopped dracut-pre-mount.service.
May 15 00:55:52.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.984117 systemd[1]: Stopped target cryptsetup.target.
May 15 00:55:52.988463 kernel: audit: type=1131 audit(1747270552.983:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.988502 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 00:55:52.989490 systemd[1]: Stopped dracut-initqueue.service.
May 15 00:55:52.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.991153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 00:55:52.994924 kernel: audit: type=1131 audit(1747270552.990:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.991255 systemd[1]: Stopped ignition-fetch-offline.service.
May 15 00:55:52.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:52.996713 systemd[1]: Stopped target paths.target.
May 15 00:55:52.998212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 00:55:53.004213 systemd[1]: Stopped systemd-ask-password-console.path.
May 15 00:55:53.006064 systemd[1]: Stopped target slices.target.
May 15 00:55:53.007613 systemd[1]: Stopped target sockets.target.
May 15 00:55:53.009197 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 00:55:53.010347 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 15 00:55:53.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.012354 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 00:55:53.013323 systemd[1]: Stopped ignition-files.service.
May 15 00:55:53.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.015676 systemd[1]: Stopping ignition-mount.service...
May 15 00:55:53.017289 systemd[1]: Stopping iscsid.service...
May 15 00:55:53.018586 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 00:55:53.019697 iscsid[731]: iscsid shutting down.
May 15 00:55:53.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.019312 systemd[1]: Stopped kmod-static-nodes.service.
May 15 00:55:53.021421 systemd[1]: Stopping sysroot-boot.service...
May 15 00:55:53.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.022639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 00:55:53.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.022760 systemd[1]: Stopped systemd-udev-trigger.service.
May 15 00:55:53.024361 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 00:55:53.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.024440 systemd[1]: Stopped dracut-pre-trigger.service.
May 15 00:55:53.027149 systemd[1]: iscsid.service: Deactivated successfully.
May 15 00:55:53.027324 systemd[1]: Stopped iscsid.service.
May 15 00:55:53.028111 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 00:55:53.028189 systemd[1]: Closed iscsid.socket.
May 15 00:55:53.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.030181 systemd[1]: Stopping iscsiuio.service...
May 15 00:55:53.031628 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 00:55:53.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.031696 systemd[1]: Finished initrd-cleanup.service.
May 15 00:55:53.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.037109 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 00:55:53.037477 systemd[1]: iscsiuio.service: Deactivated successfully.
May 15 00:55:53.037552 systemd[1]: Stopped iscsiuio.service.
May 15 00:55:53.038529 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 00:55:53.038596 systemd[1]: Stopped sysroot-boot.service.
May 15 00:55:53.039407 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 00:55:53.039432 systemd[1]: Closed iscsiuio.socket.
May 15 00:55:53.048356 ignition[877]: INFO : Ignition 2.14.0
May 15 00:55:53.048356 ignition[877]: INFO : Stage: umount
May 15 00:55:53.049923 ignition[877]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:55:53.049923 ignition[877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:53.052914 ignition[877]: INFO : umount: umount passed
May 15 00:55:53.053714 ignition[877]: INFO : Ignition finished successfully
May 15 00:55:53.055310 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 00:55:53.055383 systemd[1]: Stopped ignition-mount.service.
May 15 00:55:53.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.057772 systemd[1]: Stopped target network.target.
May 15 00:55:53.058560 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 00:55:53.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.058594 systemd[1]: Stopped ignition-disks.service.
May 15 00:55:53.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.059340 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 00:55:53.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.059368 systemd[1]: Stopped ignition-kargs.service.
May 15 00:55:53.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.061669 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 00:55:53.061708 systemd[1]: Stopped ignition-setup.service.
May 15 00:55:53.063266 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 00:55:53.063307 systemd[1]: Stopped initrd-setup-root.service.
May 15 00:55:53.065162 systemd[1]: Stopping systemd-networkd.service...
May 15 00:55:53.066576 systemd[1]: Stopping systemd-resolved.service...
May 15 00:55:53.070076 systemd-networkd[720]: eth0: DHCPv6 lease lost
May 15 00:55:53.074104 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 00:55:53.074226 systemd[1]: Stopped systemd-networkd.service.
May 15 00:55:53.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.076225 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 00:55:53.076250 systemd[1]: Closed systemd-networkd.socket.
May 15 00:55:53.078586 systemd[1]: Stopping network-cleanup.service...
May 15 00:55:53.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.080483 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 00:55:53.080535 systemd[1]: Stopped parse-ip-for-networkd.service.
May 15 00:55:53.085000 audit: BPF prog-id=9 op=UNLOAD
May 15 00:55:53.082235 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:55:53.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.082276 systemd[1]: Stopped systemd-sysctl.service.
May 15 00:55:53.085147 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 00:55:53.085928 systemd[1]: Stopped systemd-modules-load.service.
May 15 00:55:53.086884 systemd[1]: Stopping systemd-udevd.service...
May 15 00:55:53.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.090837 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 00:55:53.091254 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 00:55:53.091322 systemd[1]: Stopped systemd-resolved.service.
May 15 00:55:53.096000 audit: BPF prog-id=6 op=UNLOAD
May 15 00:55:53.096970 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 00:55:53.097970 systemd[1]: Stopped network-cleanup.service.
May 15 00:55:53.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.099750 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 00:55:53.100757 systemd[1]: Stopped systemd-udevd.service.
May 15 00:55:53.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.102672 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 00:55:53.102714 systemd[1]: Closed systemd-udevd-control.socket.
May 15 00:55:53.105452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 00:55:53.105482 systemd[1]: Closed systemd-udevd-kernel.socket.
May 15 00:55:53.107974 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 00:55:53.108011 systemd[1]: Stopped dracut-pre-udev.service.
May 15 00:55:53.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.110461 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 00:55:53.110491 systemd[1]: Stopped dracut-cmdline.service.
May 15 00:55:53.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.112832 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:55:53.112863 systemd[1]: Stopped dracut-cmdline-ask.service.
May 15 00:55:53.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.116051 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 15 00:55:53.117796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:55:53.117835 systemd[1]: Stopped systemd-vconsole-setup.service.
May 15 00:55:53.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.120738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 00:55:53.121823 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 15 00:55:53.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:53.123699 systemd[1]: Reached target initrd-switch-root.target.
May 15 00:55:53.126001 systemd[1]: Starting initrd-switch-root.service...
May 15 00:55:53.142599 systemd[1]: Switching root.
May 15 00:55:53.164727 systemd-journald[198]: Journal stopped
May 15 00:55:55.693545 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
May 15 00:55:55.693594 kernel: SELinux: Class mctp_socket not defined in policy.
May 15 00:55:55.693609 kernel: SELinux: Class anon_inode not defined in policy.
May 15 00:55:55.693620 kernel: SELinux: the above unknown classes and permissions will be allowed
May 15 00:55:55.693629 kernel: SELinux: policy capability network_peer_controls=1
May 15 00:55:55.693639 kernel: SELinux: policy capability open_perms=1
May 15 00:55:55.693650 kernel: SELinux: policy capability extended_socket_class=1
May 15 00:55:55.693659 kernel: SELinux: policy capability always_check_network=0
May 15 00:55:55.693669 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 00:55:55.693678 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 00:55:55.693687 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 00:55:55.693696 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 00:55:55.693706 systemd[1]: Successfully loaded SELinux policy in 39.374ms.
May 15 00:55:55.693726 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.542ms.
May 15 00:55:55.693737 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 00:55:55.693749 systemd[1]: Detected virtualization kvm.
May 15 00:55:55.693761 systemd[1]: Detected architecture x86-64.
May 15 00:55:55.693771 systemd[1]: Detected first boot.
May 15 00:55:55.693780 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:55:55.693794 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 15 00:55:55.693804 systemd[1]: Populated /etc with preset unit settings.
May 15 00:55:55.693814 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 00:55:55.693827 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 00:55:55.693838 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:55:55.693849 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 00:55:55.693859 systemd[1]: Stopped initrd-switch-root.service.
May 15 00:55:55.693868 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 00:55:55.693880 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 15 00:55:55.693890 systemd[1]: Created slice system-addon\x2drun.slice.
May 15 00:55:55.693900 systemd[1]: Created slice system-getty.slice.
May 15 00:55:55.693910 systemd[1]: Created slice system-modprobe.slice.
May 15 00:55:55.693920 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 15 00:55:55.693930 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 15 00:55:55.693940 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 15 00:55:55.693949 systemd[1]: Created slice user.slice.
May 15 00:55:55.693960 systemd[1]: Started systemd-ask-password-console.path.
May 15 00:55:55.693979 systemd[1]: Started systemd-ask-password-wall.path.
May 15 00:55:55.693989 systemd[1]: Set up automount boot.automount.
May 15 00:55:55.694000 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 15 00:55:55.694010 systemd[1]: Stopped target initrd-switch-root.target.
May 15 00:55:55.694023 systemd[1]: Stopped target initrd-fs.target.
May 15 00:55:55.694032 systemd[1]: Stopped target initrd-root-fs.target.
May 15 00:55:55.694042 systemd[1]: Reached target integritysetup.target.
May 15 00:55:55.694053 systemd[1]: Reached target remote-cryptsetup.target.
May 15 00:55:55.694063 systemd[1]: Reached target remote-fs.target. May 15 00:55:55.694074 systemd[1]: Reached target slices.target. May 15 00:55:55.694087 systemd[1]: Reached target swap.target. May 15 00:55:55.694099 systemd[1]: Reached target torcx.target. May 15 00:55:55.694111 systemd[1]: Reached target veritysetup.target. May 15 00:55:55.694124 systemd[1]: Listening on systemd-coredump.socket. May 15 00:55:55.694137 systemd[1]: Listening on systemd-initctl.socket. May 15 00:55:55.694149 systemd[1]: Listening on systemd-networkd.socket. May 15 00:55:55.694159 systemd[1]: Listening on systemd-udevd-control.socket. May 15 00:55:55.694182 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 00:55:55.694193 systemd[1]: Listening on systemd-userdbd.socket. May 15 00:55:55.694202 systemd[1]: Mounting dev-hugepages.mount... May 15 00:55:55.694212 systemd[1]: Mounting dev-mqueue.mount... May 15 00:55:55.694222 systemd[1]: Mounting media.mount... May 15 00:55:55.694232 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:55.694242 systemd[1]: Mounting sys-kernel-debug.mount... May 15 00:55:55.694252 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 00:55:55.694262 systemd[1]: Mounting tmp.mount... May 15 00:55:55.694274 systemd[1]: Starting flatcar-tmpfiles.service... May 15 00:55:55.694284 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:55.694294 systemd[1]: Starting kmod-static-nodes.service... May 15 00:55:55.694304 systemd[1]: Starting modprobe@configfs.service... May 15 00:55:55.694314 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:55.694324 systemd[1]: Starting modprobe@drm.service... May 15 00:55:55.694334 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:55.694344 systemd[1]: Starting modprobe@fuse.service... May 15 00:55:55.694353 systemd[1]: Starting modprobe@loop.service... 
May 15 00:55:55.694365 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:55:55.694375 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:55:55.694385 systemd[1]: Stopped systemd-fsck-root.service. May 15 00:55:55.694395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:55:55.694405 kernel: loop: module loaded May 15 00:55:55.694414 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:55:55.694424 systemd[1]: Stopped systemd-journald.service. May 15 00:55:55.694434 kernel: fuse: init (API version 7.34) May 15 00:55:55.694443 systemd[1]: Starting systemd-journald.service... May 15 00:55:55.694454 systemd[1]: Starting systemd-modules-load.service... May 15 00:55:55.694464 systemd[1]: Starting systemd-network-generator.service... May 15 00:55:55.694475 systemd[1]: Starting systemd-remount-fs.service... May 15 00:55:55.694485 systemd[1]: Starting systemd-udev-trigger.service... May 15 00:55:55.694496 systemd-journald[995]: Journal started May 15 00:55:55.694533 systemd-journald[995]: Runtime Journal (/run/log/journal/ed2ced3b209e428b90d27fcea0a68792) is 6.0M, max 48.4M, 42.4M free. 
May 15 00:55:53.223000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:55:53.515000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 00:55:53.515000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 00:55:53.515000 audit: BPF prog-id=10 op=LOAD May 15 00:55:53.515000 audit: BPF prog-id=10 op=UNLOAD May 15 00:55:53.515000 audit: BPF prog-id=11 op=LOAD May 15 00:55:53.515000 audit: BPF prog-id=11 op=UNLOAD May 15 00:55:53.550000 audit[911]: AVC avc: denied { associate } for pid=911 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 00:55:53.550000 audit[911]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=894 pid=911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:53.550000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 00:55:53.552000 audit[911]: AVC avc: denied { associate } for pid=911 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 00:55:53.552000 audit[911]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879a9 a2=1ed a3=0 items=2 ppid=894 pid=911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:53.552000 audit: CWD cwd="/" May 15 00:55:53.552000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:53.552000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:53.552000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 00:55:55.569000 audit: BPF prog-id=12 op=LOAD May 15 00:55:55.569000 audit: BPF prog-id=3 op=UNLOAD May 15 00:55:55.569000 audit: BPF prog-id=13 op=LOAD May 15 00:55:55.569000 audit: BPF prog-id=14 op=LOAD May 15 00:55:55.569000 audit: BPF prog-id=4 op=UNLOAD May 15 00:55:55.569000 audit: BPF prog-id=5 op=UNLOAD May 15 00:55:55.570000 audit: BPF prog-id=15 op=LOAD May 15 00:55:55.570000 audit: BPF prog-id=12 op=UNLOAD May 15 00:55:55.570000 audit: BPF prog-id=16 op=LOAD May 15 00:55:55.570000 audit: BPF prog-id=17 op=LOAD May 15 00:55:55.570000 audit: BPF prog-id=13 op=UNLOAD May 15 00:55:55.570000 audit: BPF prog-id=14 op=UNLOAD May 15 00:55:55.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:55.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.579000 audit: BPF prog-id=15 op=UNLOAD May 15 00:55:55.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:55.679000 audit: BPF prog-id=18 op=LOAD May 15 00:55:55.679000 audit: BPF prog-id=19 op=LOAD May 15 00:55:55.679000 audit: BPF prog-id=20 op=LOAD May 15 00:55:55.679000 audit: BPF prog-id=16 op=UNLOAD May 15 00:55:55.679000 audit: BPF prog-id=17 op=UNLOAD May 15 00:55:55.692000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 00:55:55.692000 audit[995]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffdedd6d580 a2=4000 a3=7ffdedd6d61c items=0 ppid=1 pid=995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:55.692000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 00:55:53.548950 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:55.567922 systemd[1]: Queued start job for default target multi-user.target. May 15 00:55:53.549219 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 00:55:55.567932 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 00:55:53.549234 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 00:55:55.571028 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 15 00:55:53.549262 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 00:55:53.549270 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 00:55:53.549297 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 00:55:53.549308 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 00:55:53.549490 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 00:55:53.549521 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 00:55:53.549532 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 00:55:53.549915 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 00:55:53.549952 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 00:55:53.549972 /usr/lib/systemd/system-generators/torcx-generator[911]: 
time="2025-05-15T00:55:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 15 00:55:53.549986 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 00:55:53.550005 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 15 00:55:53.550028 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 00:55:55.696750 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:55:55.696771 systemd[1]: Stopped verity-setup.service. 
May 15 00:55:55.319608 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:55.319854 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:55.319950 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:55.320112 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:55.320157 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 00:55:55.320232 /usr/lib/systemd/system-generators/torcx-generator[911]: time="2025-05-15T00:55:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 00:55:55.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.700208 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:55.702376 systemd[1]: Started systemd-journald.service. May 15 00:55:55.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.702938 systemd[1]: Mounted dev-hugepages.mount. May 15 00:55:55.703827 systemd[1]: Mounted dev-mqueue.mount. May 15 00:55:55.704691 systemd[1]: Mounted media.mount. May 15 00:55:55.705499 systemd[1]: Mounted sys-kernel-debug.mount. May 15 00:55:55.706412 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 00:55:55.707343 systemd[1]: Mounted tmp.mount. May 15 00:55:55.708314 systemd[1]: Finished flatcar-tmpfiles.service. May 15 00:55:55.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.709448 systemd[1]: Finished kmod-static-nodes.service. May 15 00:55:55.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.710538 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:55:55.710719 systemd[1]: Finished modprobe@configfs.service. May 15 00:55:55.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:55.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.711817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:55.711998 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:55.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.713068 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:55:55.713234 systemd[1]: Finished modprobe@drm.service. May 15 00:55:55.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.714276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:55.714411 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:55.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:55.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.715522 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:55:55.715679 systemd[1]: Finished modprobe@fuse.service. May 15 00:55:55.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.716700 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:55.716847 systemd[1]: Finished modprobe@loop.service. May 15 00:55:55.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.718152 systemd[1]: Finished systemd-modules-load.service. May 15 00:55:55.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.719503 systemd[1]: Finished systemd-network-generator.service. 
May 15 00:55:55.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.721115 systemd[1]: Finished systemd-remount-fs.service. May 15 00:55:55.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.722718 systemd[1]: Reached target network-pre.target. May 15 00:55:55.725102 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 00:55:55.727464 systemd[1]: Mounting sys-kernel-config.mount... May 15 00:55:55.728584 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:55:55.730225 systemd[1]: Starting systemd-hwdb-update.service... May 15 00:55:55.732668 systemd[1]: Starting systemd-journal-flush.service... May 15 00:55:55.734033 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:55.735010 systemd[1]: Starting systemd-random-seed.service... May 15 00:55:55.736236 systemd-journald[995]: Time spent on flushing to /var/log/journal/ed2ced3b209e428b90d27fcea0a68792 is 13.864ms for 1163 entries. May 15 00:55:55.736236 systemd-journald[995]: System Journal (/var/log/journal/ed2ced3b209e428b90d27fcea0a68792) is 8.0M, max 195.6M, 187.6M free. May 15 00:55:55.769100 systemd-journald[995]: Received client request to flush runtime journal. May 15 00:55:55.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:55.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:55.736224 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:55.737062 systemd[1]: Starting systemd-sysctl.service... May 15 00:55:55.740857 systemd[1]: Starting systemd-sysusers.service... May 15 00:55:55.743402 systemd[1]: Finished systemd-udev-trigger.service. May 15 00:55:55.770086 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 00:55:55.744601 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 00:55:55.745657 systemd[1]: Mounted sys-kernel-config.mount. May 15 00:55:55.746908 systemd[1]: Finished systemd-random-seed.service. May 15 00:55:55.748305 systemd[1]: Reached target first-boot-complete.target. May 15 00:55:55.750570 systemd[1]: Starting systemd-udev-settle.service... May 15 00:55:55.751761 systemd[1]: Finished systemd-sysctl.service. May 15 00:55:55.754847 systemd[1]: Finished systemd-sysusers.service. May 15 00:55:55.769697 systemd[1]: Finished systemd-journal-flush.service. 
May 15 00:55:55.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.158246 systemd[1]: Finished systemd-hwdb-update.service. May 15 00:55:56.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.159000 audit: BPF prog-id=21 op=LOAD May 15 00:55:56.159000 audit: BPF prog-id=22 op=LOAD May 15 00:55:56.159000 audit: BPF prog-id=7 op=UNLOAD May 15 00:55:56.159000 audit: BPF prog-id=8 op=UNLOAD May 15 00:55:56.160366 systemd[1]: Starting systemd-udevd.service... May 15 00:55:56.174835 systemd-udevd[1017]: Using default interface naming scheme 'v252'. May 15 00:55:56.186135 systemd[1]: Started systemd-udevd.service. May 15 00:55:56.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.188000 audit: BPF prog-id=23 op=LOAD May 15 00:55:56.189502 systemd[1]: Starting systemd-networkd.service... May 15 00:55:56.194000 audit: BPF prog-id=24 op=LOAD May 15 00:55:56.194000 audit: BPF prog-id=25 op=LOAD May 15 00:55:56.194000 audit: BPF prog-id=26 op=LOAD May 15 00:55:56.195646 systemd[1]: Starting systemd-userdbd.service... May 15 00:55:56.213907 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 15 00:55:56.224489 systemd[1]: Started systemd-userdbd.service. May 15 00:55:56.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:56.229504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 00:55:56.249201 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 00:55:56.255195 kernel: ACPI: button: Power Button [PWRF] May 15 00:55:56.264790 systemd-networkd[1025]: lo: Link UP May 15 00:55:56.265013 systemd-networkd[1025]: lo: Gained carrier May 15 00:55:56.265452 systemd-networkd[1025]: Enumeration completed May 15 00:55:56.265611 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:55:56.265620 systemd[1]: Started systemd-networkd.service. May 15 00:55:56.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.267564 systemd-networkd[1025]: eth0: Link UP May 15 00:55:56.267639 systemd-networkd[1025]: eth0: Gained carrier May 15 00:55:56.268000 audit[1036]: AVC avc: denied { confidentiality } for pid=1036 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 00:55:56.279317 systemd-networkd[1025]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:55:56.268000 audit[1036]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5564c1a90d20 a1=338ac a2=7f0ddcb7bbc5 a3=5 items=110 ppid=1017 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:56.268000 audit: CWD cwd="/" May 15 00:55:56.268000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:55:56.268000 audit: PATH item=1 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=2 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=3 name=(null) inode=13613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=4 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=5 name=(null) inode=13614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=6 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=7 name=(null) inode=13615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=8 name=(null) inode=13615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=9 name=(null) inode=13616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=10 name=(null) 
inode=13615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=11 name=(null) inode=13617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=12 name=(null) inode=13615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=13 name=(null) inode=13618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=14 name=(null) inode=13615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=15 name=(null) inode=13619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=16 name=(null) inode=13615 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=17 name=(null) inode=13620 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=18 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=19 name=(null) inode=13621 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=20 name=(null) inode=13621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=21 name=(null) inode=13622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=22 name=(null) inode=13621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=23 name=(null) inode=13623 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=24 name=(null) inode=13621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=25 name=(null) inode=13624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=26 name=(null) inode=13621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=27 name=(null) inode=13625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=28 name=(null) inode=13621 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=29 name=(null) inode=13626 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=30 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=31 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=32 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=33 name=(null) inode=13628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=34 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=35 name=(null) inode=13629 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=36 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=37 name=(null) inode=13630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=38 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=39 name=(null) inode=13631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=40 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=41 name=(null) inode=13632 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=42 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=43 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=44 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=45 name=(null) inode=13634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=46 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=47 name=(null) inode=13635 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=48 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=49 name=(null) inode=13636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=50 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=51 name=(null) inode=13637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=52 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=53 name=(null) inode=13638 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=55 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:55:56.268000 audit: PATH item=56 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=57 name=(null) inode=13640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=58 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=59 name=(null) inode=13641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=60 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=61 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=62 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=63 name=(null) inode=13643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=64 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=65 
name=(null) inode=13644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=66 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=67 name=(null) inode=13645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=68 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=69 name=(null) inode=13646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=70 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=71 name=(null) inode=13647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=72 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=73 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=74 name=(null) inode=13648 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=75 name=(null) inode=13649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=76 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=77 name=(null) inode=13650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=78 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=79 name=(null) inode=13651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=80 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=81 name=(null) inode=13652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=82 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=83 name=(null) inode=13653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=84 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=85 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=86 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=87 name=(null) inode=13655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=88 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=89 name=(null) inode=13656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=90 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=91 name=(null) inode=13657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=92 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=93 name=(null) inode=13658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=94 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=95 name=(null) inode=13659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=96 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=97 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=98 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=99 name=(null) inode=13661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=100 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=101 name=(null) inode=13662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=102 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=103 name=(null) inode=13663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=104 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=105 name=(null) inode=13664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=106 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=107 name=(null) inode=13665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PATH item=109 name=(null) inode=13671 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:56.268000 audit: PROCTITLE proctitle="(udev-worker)" May 15 00:55:56.314193 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 00:55:56.318421 kernel: 
i801_smbus 0000:00:1f.3: Enabling SMBus device May 15 00:55:56.321447 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 00:55:56.321558 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:55:56.321579 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 00:55:56.321686 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 00:55:56.359782 kernel: kvm: Nested Virtualization enabled May 15 00:55:56.359843 kernel: SVM: kvm: Nested Paging enabled May 15 00:55:56.359857 kernel: SVM: Virtual VMLOAD VMSAVE supported May 15 00:55:56.359869 kernel: SVM: Virtual GIF supported May 15 00:55:56.375200 kernel: EDAC MC: Ver: 3.0.0 May 15 00:55:56.395543 systemd[1]: Finished systemd-udev-settle.service. May 15 00:55:56.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.397521 systemd[1]: Starting lvm2-activation-early.service... May 15 00:55:56.405457 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:55:56.434942 systemd[1]: Finished lvm2-activation-early.service. May 15 00:55:56.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.435997 systemd[1]: Reached target cryptsetup.target. May 15 00:55:56.438044 systemd[1]: Starting lvm2-activation.service... May 15 00:55:56.441346 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:55:56.466301 systemd[1]: Finished lvm2-activation.service. 
May 15 00:55:56.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.467254 systemd[1]: Reached target local-fs-pre.target. May 15 00:55:56.468122 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:55:56.468145 systemd[1]: Reached target local-fs.target. May 15 00:55:56.468943 systemd[1]: Reached target machines.target. May 15 00:55:56.470758 systemd[1]: Starting ldconfig.service... May 15 00:55:56.471721 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:56.471765 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:56.472575 systemd[1]: Starting systemd-boot-update.service... May 15 00:55:56.474594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 00:55:56.476364 systemd[1]: Starting systemd-machine-id-commit.service... May 15 00:55:56.478038 systemd[1]: Starting systemd-sysext.service... May 15 00:55:56.480803 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) May 15 00:55:56.481988 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 00:55:56.484624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 00:55:56.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.489417 systemd[1]: Unmounting usr-share-oem.mount... 
May 15 00:55:56.492424 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 00:55:56.492552 systemd[1]: Unmounted usr-share-oem.mount. May 15 00:55:56.500832 kernel: loop0: detected capacity change from 0 to 218376 May 15 00:55:56.526444 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) May 15 00:55:56.526444 systemd-fsck[1064]: /dev/vda1: 791 files, 120710/258078 clusters May 15 00:55:56.528331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 00:55:56.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.531712 systemd[1]: Mounting boot.mount... May 15 00:55:56.700230 systemd[1]: Mounted boot.mount. May 15 00:55:56.706196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:55:56.711253 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:55:56.711952 systemd[1]: Finished systemd-machine-id-commit.service. May 15 00:55:56.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.714452 systemd[1]: Finished systemd-boot-update.service. May 15 00:55:56.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.722194 kernel: loop1: detected capacity change from 0 to 218376 May 15 00:55:56.726200 (sd-sysext)[1069]: Using extensions 'kubernetes'. May 15 00:55:56.726501 (sd-sysext)[1069]: Merged extensions into '/usr'. 
May 15 00:55:56.740840 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:56.742040 systemd[1]: Mounting usr-share-oem.mount... May 15 00:55:56.743047 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:56.744116 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:56.745734 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:56.747381 systemd[1]: Starting modprobe@loop.service... May 15 00:55:56.748166 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:56.748283 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:56.748378 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:56.750614 systemd[1]: Mounted usr-share-oem.mount. May 15 00:55:56.751702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:56.751801 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:56.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.752932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:56.753034 systemd[1]: Finished modprobe@efi_pstore.service. 
May 15 00:55:56.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.754213 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:56.754302 systemd[1]: Finished modprobe@loop.service. May 15 00:55:56.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.755519 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:56.755614 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:56.756430 systemd[1]: Finished systemd-sysext.service. May 15 00:55:56.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.758562 systemd[1]: Starting ensure-sysext.service... May 15 00:55:56.760059 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:55:56.760169 systemd[1]: Starting systemd-tmpfiles-setup.service... 
May 15 00:55:56.765496 systemd[1]: Reloading. May 15 00:55:56.773874 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 00:55:56.775626 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:55:56.778320 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:55:56.819344 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-05-15T00:55:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:56.819688 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2025-05-15T00:55:56Z" level=info msg="torcx already run" May 15 00:55:56.882056 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:55:56.882073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:55:56.899113 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 15 00:55:56.949000 audit: BPF prog-id=27 op=LOAD May 15 00:55:56.949000 audit: BPF prog-id=23 op=UNLOAD May 15 00:55:56.950000 audit: BPF prog-id=28 op=LOAD May 15 00:55:56.950000 audit: BPF prog-id=29 op=LOAD May 15 00:55:56.950000 audit: BPF prog-id=21 op=UNLOAD May 15 00:55:56.950000 audit: BPF prog-id=22 op=UNLOAD May 15 00:55:56.952000 audit: BPF prog-id=30 op=LOAD May 15 00:55:56.952000 audit: BPF prog-id=24 op=UNLOAD May 15 00:55:56.952000 audit: BPF prog-id=31 op=LOAD May 15 00:55:56.952000 audit: BPF prog-id=32 op=LOAD May 15 00:55:56.952000 audit: BPF prog-id=25 op=UNLOAD May 15 00:55:56.952000 audit: BPF prog-id=26 op=UNLOAD May 15 00:55:56.953000 audit: BPF prog-id=33 op=LOAD May 15 00:55:56.953000 audit: BPF prog-id=18 op=UNLOAD May 15 00:55:56.953000 audit: BPF prog-id=34 op=LOAD May 15 00:55:56.953000 audit: BPF prog-id=35 op=LOAD May 15 00:55:56.953000 audit: BPF prog-id=19 op=UNLOAD May 15 00:55:56.953000 audit: BPF prog-id=20 op=UNLOAD May 15 00:55:56.955484 systemd[1]: Finished ldconfig.service. May 15 00:55:56.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.957297 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 00:55:56.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:56.960558 systemd[1]: Starting audit-rules.service... May 15 00:55:56.962387 systemd[1]: Starting clean-ca-certificates.service... May 15 00:55:56.964334 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 00:55:56.966000 audit: BPF prog-id=36 op=LOAD May 15 00:55:56.967036 systemd[1]: Starting systemd-resolved.service... 
May 15 00:55:56.968000 audit: BPF prog-id=37 op=LOAD
May 15 00:55:56.969154 systemd[1]: Starting systemd-timesyncd.service...
May 15 00:55:56.970769 systemd[1]: Starting systemd-update-utmp.service...
May 15 00:55:56.972337 systemd[1]: Finished clean-ca-certificates.service.
May 15 00:55:56.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:56.974000 audit[1150]: SYSTEM_BOOT pid=1150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 15 00:55:56.979790 systemd[1]: Finished systemd-update-utmp.service.
May 15 00:55:56.980000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 15 00:55:56.980000 audit[1159]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb6f60250 a2=420 a3=0 items=0 ppid=1139 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:55:56.980000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 15 00:55:56.981187 augenrules[1159]: No rules
May 15 00:55:56.981763 systemd[1]: Finished audit-rules.service.
May 15 00:55:56.983980 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:55:56.984160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 00:55:56.985388 systemd[1]: Starting modprobe@dm_mod.service...
May 15 00:55:56.987270 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 00:55:56.989052 systemd[1]: Starting modprobe@loop.service...
May 15 00:55:56.989880 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 00:55:56.989995 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 00:55:56.990078 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 00:55:56.990139 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:55:56.991311 systemd[1]: Finished systemd-journal-catalog-update.service.
May 15 00:55:56.992640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:55:56.992739 systemd[1]: Finished modprobe@dm_mod.service.
May 15 00:55:56.993949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:55:56.994042 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 00:55:56.995245 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:55:56.995339 systemd[1]: Finished modprobe@loop.service.
May 15 00:55:56.996441 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:55:56.996528 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 00:55:56.997640 systemd[1]: Starting systemd-update-done.service...
May 15 00:55:57.000828 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:55:57.001019 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 00:55:57.002395 systemd[1]: Starting modprobe@dm_mod.service...
May 15 00:55:57.004182 systemd[1]: Starting modprobe@drm.service...
May 15 00:55:57.006123 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 00:55:57.008122 systemd[1]: Starting modprobe@loop.service...
May 15 00:55:57.009004 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 15 00:55:57.009100 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 00:55:57.010234 systemd[1]: Starting systemd-networkd-wait-online.service...
May 15 00:55:57.011351 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 00:55:57.011450 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:55:57.012620 systemd[1]: Finished systemd-update-done.service.
May 15 00:55:57.014206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:55:57.014302 systemd[1]: Finished modprobe@dm_mod.service.
May 15 00:55:57.015494 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:55:57.015588 systemd[1]: Finished modprobe@drm.service.
May 15 00:55:57.016733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:55:57.016826 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 00:55:57.018076 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:55:57.018191 systemd[1]: Finished modprobe@loop.service.
May 15 00:55:57.019781 systemd-resolved[1148]: Positive Trust Anchors:
May 15 00:55:57.019794 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:55:57.019871 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 00:55:57.020091 systemd-resolved[1148]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:55:57.020211 systemd-resolved[1148]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 00:55:57.020919 systemd[1]: Finished ensure-sysext.service.
May 15 00:55:57.023569 systemd[1]: Started systemd-timesyncd.service.
May 15 00:55:57.025020 systemd-timesyncd[1149]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 00:55:57.025026 systemd[1]: Reached target time-set.target.
May 15 00:55:57.025062 systemd-timesyncd[1149]: Initial clock synchronization to Thu 2025-05-15 00:55:57.229794 UTC.
May 15 00:55:57.027473 systemd-resolved[1148]: Defaulting to hostname 'linux'.
May 15 00:55:57.028749 systemd[1]: Started systemd-resolved.service.
May 15 00:55:57.029706 systemd[1]: Reached target network.target.
May 15 00:55:57.030565 systemd[1]: Reached target nss-lookup.target.
May 15 00:55:57.031504 systemd[1]: Reached target sysinit.target.
May 15 00:55:57.032417 systemd[1]: Started motdgen.path.
May 15 00:55:57.033189 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 15 00:55:57.034445 systemd[1]: Started logrotate.timer.
May 15 00:55:57.035290 systemd[1]: Started mdadm.timer.
May 15 00:55:57.036017 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 15 00:55:57.036929 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:55:57.036960 systemd[1]: Reached target paths.target.
May 15 00:55:57.037779 systemd[1]: Reached target timers.target.
May 15 00:55:57.038852 systemd[1]: Listening on dbus.socket.
May 15 00:55:57.040469 systemd[1]: Starting docker.socket...
May 15 00:55:57.043063 systemd[1]: Listening on sshd.socket.
May 15 00:55:57.043971 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 00:55:57.044292 systemd[1]: Listening on docker.socket.
May 15 00:55:57.045145 systemd[1]: Reached target sockets.target.
May 15 00:55:57.045997 systemd[1]: Reached target basic.target.
May 15 00:55:57.046841 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 00:55:57.046864 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 00:55:57.047640 systemd[1]: Starting containerd.service...
May 15 00:55:57.049213 systemd[1]: Starting dbus.service...
May 15 00:55:57.050742 systemd[1]: Starting enable-oem-cloudinit.service...
May 15 00:55:57.052488 systemd[1]: Starting extend-filesystems.service...
May 15 00:55:57.053486 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 15 00:55:57.054337 jq[1178]: false
May 15 00:55:57.054516 systemd[1]: Starting motdgen.service...
May 15 00:55:57.056284 systemd[1]: Starting prepare-helm.service...
May 15 00:55:57.058235 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 15 00:55:57.060068 systemd[1]: Starting sshd-keygen.service...
May 15 00:55:57.063112 systemd[1]: Starting systemd-logind.service...
May 15 00:55:57.064067 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 00:55:57.065124 dbus-daemon[1177]: [system] SELinux support is enabled
May 15 00:55:57.064113 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 00:55:57.064446 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 00:55:57.064961 systemd[1]: Starting update-engine.service...
May 15 00:55:57.067020 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 15 00:55:57.069201 systemd[1]: Started dbus.service.
May 15 00:55:57.071701 jq[1197]: true
May 15 00:55:57.072345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 00:55:57.074712 extend-filesystems[1179]: Found loop1
May 15 00:55:57.074712 extend-filesystems[1179]: Found sr0
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda1
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda2
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda3
May 15 00:55:57.074712 extend-filesystems[1179]: Found usr
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda4
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda6
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda7
May 15 00:55:57.074712 extend-filesystems[1179]: Found vda9
May 15 00:55:57.074712 extend-filesystems[1179]: Checking size of /dev/vda9
May 15 00:55:57.072482 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 15 00:55:57.100674 extend-filesystems[1179]: Resized partition /dev/vda9
May 15 00:55:57.101714 tar[1199]: linux-amd64/LICENSE
May 15 00:55:57.101714 tar[1199]: linux-amd64/helm
May 15 00:55:57.073195 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 00:55:57.102012 extend-filesystems[1227]: resize2fs 1.46.5 (30-Dec-2021)
May 15 00:55:57.073318 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 15 00:55:57.103584 jq[1203]: true
May 15 00:55:57.076076 systemd[1]: motdgen.service: Deactivated successfully.
May 15 00:55:57.076218 systemd[1]: Finished motdgen.service.
May 15 00:55:57.078752 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 00:55:57.078777 systemd[1]: Reached target system-config.target.
May 15 00:55:57.080017 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 00:55:57.080031 systemd[1]: Reached target user-config.target.
May 15 00:55:57.106546 env[1204]: time="2025-05-15T00:55:57.106500036Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 15 00:55:57.108198 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 00:55:57.110585 update_engine[1195]: I0515 00:55:57.110342 1195 main.cc:92] Flatcar Update Engine starting
May 15 00:55:57.111987 systemd[1]: Started update-engine.service.
May 15 00:55:57.117211 update_engine[1195]: I0515 00:55:57.112020 1195 update_check_scheduler.cc:74] Next update check in 7m23s
May 15 00:55:57.114420 systemd[1]: Started locksmithd.service.
May 15 00:55:57.132361 env[1204]: time="2025-05-15T00:55:57.132168511Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 00:55:57.132963 env[1204]: time="2025-05-15T00:55:57.132930060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 00:55:57.133038 systemd-logind[1192]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 00:55:57.133059 systemd-logind[1192]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134150731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134199562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134382766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134397514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134410568Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134419575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134480519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 00:55:57.134685 env[1204]: time="2025-05-15T00:55:57.134660407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 00:55:57.134224 systemd-logind[1192]: New seat seat0.
May 15 00:55:57.135042 env[1204]: time="2025-05-15T00:55:57.135022697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:55:57.135118 env[1204]: time="2025-05-15T00:55:57.135099741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 00:55:57.135324 env[1204]: time="2025-05-15T00:55:57.135308493Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 15 00:55:57.135395 env[1204]: time="2025-05-15T00:55:57.135377442Z" level=info msg="metadata content store policy set" policy=shared
May 15 00:55:57.137213 bash[1224]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:55:57.137966 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 15 00:55:57.142051 systemd[1]: Started systemd-logind.service.
May 15 00:55:57.147199 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 00:55:57.166814 locksmithd[1233]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 00:55:57.178614 extend-filesystems[1227]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 00:55:57.178614 extend-filesystems[1227]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 00:55:57.178614 extend-filesystems[1227]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 00:55:57.183575 extend-filesystems[1179]: Resized filesystem in /dev/vda9
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182738733Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182800488Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182817039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182848048Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182883775Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182897480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182908351Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.182986527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.183013147Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.183027765Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.183039747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.183052281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.183214205Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 00:55:57.184570 env[1204]: time="2025-05-15T00:55:57.183297050Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 00:55:57.179203 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183593226Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183615718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183629384Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183686301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183698273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183709925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183721156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183748467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183759929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183770198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183779926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183845068Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183981454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.183996022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 00:55:57.184908 env[1204]: time="2025-05-15T00:55:57.184007193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 00:55:57.179362 systemd[1]: Finished extend-filesystems.service.
May 15 00:55:57.185248 env[1204]: time="2025-05-15T00:55:57.184031809Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 00:55:57.185248 env[1204]: time="2025-05-15T00:55:57.184048230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 15 00:55:57.185248 env[1204]: time="2025-05-15T00:55:57.184058218Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 00:55:57.185248 env[1204]: time="2025-05-15T00:55:57.184076383Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 15 00:55:57.185248 env[1204]: time="2025-05-15T00:55:57.184120565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 00:55:57.185355 env[1204]: time="2025-05-15T00:55:57.184325410Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 00:55:57.185355 env[1204]: time="2025-05-15T00:55:57.184386144Z" level=info msg="Connect containerd service"
May 15 00:55:57.185355 env[1204]: time="2025-05-15T00:55:57.184437059Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 00:55:57.185355 env[1204]: time="2025-05-15T00:55:57.185006528Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:55:57.186065 env[1204]: time="2025-05-15T00:55:57.185711100Z" level=info msg="Start subscribing containerd event"
May 15 00:55:57.186065 env[1204]: time="2025-05-15T00:55:57.185841835Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 00:55:57.186065 env[1204]: time="2025-05-15T00:55:57.185901537Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 00:55:57.186008 systemd[1]: Started containerd.service.
May 15 00:55:57.186240 env[1204]: time="2025-05-15T00:55:57.186221868Z" level=info msg="Start recovering state"
May 15 00:55:57.186385 env[1204]: time="2025-05-15T00:55:57.186361701Z" level=info msg="Start event monitor"
May 15 00:55:57.186475 env[1204]: time="2025-05-15T00:55:57.186454775Z" level=info msg="Start snapshots syncer"
May 15 00:55:57.186594 env[1204]: time="2025-05-15T00:55:57.186578858Z" level=info msg="Start cni network conf syncer for default"
May 15 00:55:57.186754 env[1204]: time="2025-05-15T00:55:57.186705185Z" level=info msg="Start streaming server"
May 15 00:55:57.187160 env[1204]: time="2025-05-15T00:55:57.187145672Z" level=info msg="containerd successfully booted in 0.081111s"
May 15 00:55:57.518357 tar[1199]: linux-amd64/README.md
May 15 00:55:57.522719 systemd[1]: Finished prepare-helm.service.
May 15 00:55:57.560316 systemd-networkd[1025]: eth0: Gained IPv6LL
May 15 00:55:57.562104 systemd[1]: Finished systemd-networkd-wait-online.service.
May 15 00:55:57.575578 systemd[1]: Reached target network-online.target.
May 15 00:55:57.577680 systemd[1]: Starting kubelet.service...
May 15 00:55:57.963395 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 00:55:57.981341 systemd[1]: Finished sshd-keygen.service.
May 15 00:55:57.983594 systemd[1]: Starting issuegen.service...
May 15 00:55:57.989113 systemd[1]: issuegen.service: Deactivated successfully.
May 15 00:55:57.989292 systemd[1]: Finished issuegen.service.
May 15 00:55:57.991161 systemd[1]: Starting systemd-user-sessions.service...
May 15 00:55:57.996195 systemd[1]: Finished systemd-user-sessions.service.
May 15 00:55:57.998156 systemd[1]: Started getty@tty1.service.
May 15 00:55:57.999981 systemd[1]: Started serial-getty@ttyS0.service.
May 15 00:55:58.001204 systemd[1]: Reached target getty.target.
May 15 00:55:58.199512 systemd[1]: Started kubelet.service.
May 15 00:55:58.200730 systemd[1]: Reached target multi-user.target.
May 15 00:55:58.202619 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 15 00:55:58.209754 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 15 00:55:58.209873 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 15 00:55:58.210963 systemd[1]: Startup finished in 585ms (kernel) + 6.493s (initrd) + 5.028s (userspace) = 12.107s.
May 15 00:55:58.616393 kubelet[1259]: E0515 00:55:58.616320 1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:55:58.618443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:55:58.618567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:55:59.132991 systemd[1]: Created slice system-sshd.slice.
May 15 00:55:59.133968 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:43264.service.
May 15 00:55:59.175011 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 43264 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:55:59.176548 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:59.183808 systemd[1]: Created slice user-500.slice.
May 15 00:55:59.184876 systemd[1]: Starting user-runtime-dir@500.service...
May 15 00:55:59.186975 systemd-logind[1192]: New session 1 of user core.
May 15 00:55:59.192843 systemd[1]: Finished user-runtime-dir@500.service.
May 15 00:55:59.194123 systemd[1]: Starting user@500.service...
May 15 00:55:59.197313 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:59.275004 systemd[1271]: Queued start job for default target default.target.
May 15 00:55:59.275482 systemd[1271]: Reached target paths.target.
May 15 00:55:59.275505 systemd[1271]: Reached target sockets.target.
May 15 00:55:59.275517 systemd[1271]: Reached target timers.target.
May 15 00:55:59.275528 systemd[1271]: Reached target basic.target.
May 15 00:55:59.275562 systemd[1271]: Reached target default.target.
May 15 00:55:59.275584 systemd[1271]: Startup finished in 72ms.
May 15 00:55:59.275728 systemd[1]: Started user@500.service.
May 15 00:55:59.276728 systemd[1]: Started session-1.scope.
May 15 00:55:59.328428 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:43278.service.
May 15 00:55:59.367013 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 43278 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:55:59.367975 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:59.372169 systemd-logind[1192]: New session 2 of user core.
May 15 00:55:59.373170 systemd[1]: Started session-2.scope.
May 15 00:55:59.427066 sshd[1280]: pam_unix(sshd:session): session closed for user core
May 15 00:55:59.430000 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:43278.service: Deactivated successfully.
May 15 00:55:59.430558 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:55:59.431047 systemd-logind[1192]: Session 2 logged out. Waiting for processes to exit.
May 15 00:55:59.432272 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:43290.service.
May 15 00:55:59.432874 systemd-logind[1192]: Removed session 2.
May 15 00:55:59.469283 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 43290 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:55:59.470281 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:59.473280 systemd-logind[1192]: New session 3 of user core.
May 15 00:55:59.473987 systemd[1]: Started session-3.scope.
May 15 00:55:59.523583 sshd[1286]: pam_unix(sshd:session): session closed for user core May 15 00:55:59.526547 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:43290.service: Deactivated successfully. May 15 00:55:59.527032 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:55:59.527537 systemd-logind[1192]: Session 3 logged out. Waiting for processes to exit. May 15 00:55:59.528557 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:43296.service. May 15 00:55:59.529120 systemd-logind[1192]: Removed session 3. May 15 00:55:59.566261 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 43296 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:55:59.567329 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:59.570760 systemd-logind[1192]: New session 4 of user core. May 15 00:55:59.571536 systemd[1]: Started session-4.scope. May 15 00:55:59.624190 sshd[1292]: pam_unix(sshd:session): session closed for user core May 15 00:55:59.627175 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:43296.service: Deactivated successfully. May 15 00:55:59.627686 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:55:59.628142 systemd-logind[1192]: Session 4 logged out. Waiting for processes to exit. May 15 00:55:59.629112 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:43312.service. May 15 00:55:59.629884 systemd-logind[1192]: Removed session 4. May 15 00:55:59.666705 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 43312 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:55:59.667627 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:59.670905 systemd-logind[1192]: New session 5 of user core. May 15 00:55:59.671688 systemd[1]: Started session-5.scope. 
May 15 00:55:59.726319 sudo[1302]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:55:59.726514 sudo[1302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 00:55:59.745639 systemd[1]: Starting docker.service... May 15 00:55:59.778253 env[1314]: time="2025-05-15T00:55:59.778209292Z" level=info msg="Starting up" May 15 00:55:59.779432 env[1314]: time="2025-05-15T00:55:59.779388178Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 00:55:59.779432 env[1314]: time="2025-05-15T00:55:59.779413858Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 00:55:59.779432 env[1314]: time="2025-05-15T00:55:59.779437345Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 00:55:59.779610 env[1314]: time="2025-05-15T00:55:59.779447957Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 00:55:59.781024 env[1314]: time="2025-05-15T00:55:59.780989874Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 00:55:59.781024 env[1314]: time="2025-05-15T00:55:59.781014468Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 00:55:59.781096 env[1314]: time="2025-05-15T00:55:59.781034145Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 00:55:59.781096 env[1314]: time="2025-05-15T00:55:59.781043856Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 00:55:59.786258 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1608039363-merged.mount: Deactivated successfully. May 15 00:56:00.332927 env[1314]: time="2025-05-15T00:56:00.332892487Z" level=info msg="Loading containers: start." 
May 15 00:56:00.445247 kernel: Initializing XFRM netlink socket May 15 00:56:00.471311 env[1314]: time="2025-05-15T00:56:00.471269054Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 15 00:56:00.518473 systemd-networkd[1025]: docker0: Link UP May 15 00:56:00.534579 env[1314]: time="2025-05-15T00:56:00.534524402Z" level=info msg="Loading containers: done." May 15 00:56:00.544015 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck38096837-merged.mount: Deactivated successfully. May 15 00:56:00.546947 env[1314]: time="2025-05-15T00:56:00.546902381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:56:00.547102 env[1314]: time="2025-05-15T00:56:00.547074005Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 00:56:00.547203 env[1314]: time="2025-05-15T00:56:00.547167281Z" level=info msg="Daemon has completed initialization" May 15 00:56:00.568380 systemd[1]: Started docker.service. May 15 00:56:00.572933 env[1314]: time="2025-05-15T00:56:00.572876857Z" level=info msg="API listen on /run/docker.sock" May 15 00:56:01.362713 env[1204]: time="2025-05-15T00:56:01.362634529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 00:56:02.018938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126046338.mount: Deactivated successfully. 
May 15 00:56:03.964406 env[1204]: time="2025-05-15T00:56:03.964350141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:04.029295 env[1204]: time="2025-05-15T00:56:04.029267036Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:04.073504 env[1204]: time="2025-05-15T00:56:04.073463794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:04.111713 env[1204]: time="2025-05-15T00:56:04.111689968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:04.112327 env[1204]: time="2025-05-15T00:56:04.112302038Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 00:56:04.112883 env[1204]: time="2025-05-15T00:56:04.112850171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 00:56:05.782618 env[1204]: time="2025-05-15T00:56:05.782562668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:05.784514 env[1204]: time="2025-05-15T00:56:05.784460253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 15 00:56:05.786025 env[1204]: time="2025-05-15T00:56:05.785996508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:05.787857 env[1204]: time="2025-05-15T00:56:05.787817205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:05.788577 env[1204]: time="2025-05-15T00:56:05.788538981Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 00:56:05.789126 env[1204]: time="2025-05-15T00:56:05.789093360Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 00:56:07.914447 env[1204]: time="2025-05-15T00:56:07.914394086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:07.916393 env[1204]: time="2025-05-15T00:56:07.916364103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:07.918394 env[1204]: time="2025-05-15T00:56:07.918358957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:07.919964 env[1204]: time="2025-05-15T00:56:07.919935507Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:07.920989 env[1204]: time="2025-05-15T00:56:07.920952873Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 00:56:07.921620 env[1204]: time="2025-05-15T00:56:07.921587442Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 00:56:08.869305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:56:08.869486 systemd[1]: Stopped kubelet.service. May 15 00:56:08.870629 systemd[1]: Starting kubelet.service... May 15 00:56:08.939154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499103047.mount: Deactivated successfully. May 15 00:56:08.951054 systemd[1]: Started kubelet.service. May 15 00:56:09.177027 kubelet[1447]: E0515 00:56:09.176920 1447 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:56:09.179849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:56:09.179969 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 15 00:56:10.130885 env[1204]: time="2025-05-15T00:56:10.130812577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:10.133259 env[1204]: time="2025-05-15T00:56:10.133199732Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:10.135058 env[1204]: time="2025-05-15T00:56:10.135002990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:10.136850 env[1204]: time="2025-05-15T00:56:10.136819219Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:10.138227 env[1204]: time="2025-05-15T00:56:10.137669337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 00:56:10.139605 env[1204]: time="2025-05-15T00:56:10.138591056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 00:56:10.843688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856572111.mount: Deactivated successfully. 
May 15 00:56:11.770331 env[1204]: time="2025-05-15T00:56:11.770274696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.772470 env[1204]: time="2025-05-15T00:56:11.772414868Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.774402 env[1204]: time="2025-05-15T00:56:11.774368523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.775920 env[1204]: time="2025-05-15T00:56:11.775889326Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.776529 env[1204]: time="2025-05-15T00:56:11.776495320Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 00:56:11.777005 env[1204]: time="2025-05-15T00:56:11.776972747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:56:12.531766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142646707.mount: Deactivated successfully. 
May 15 00:56:12.538098 env[1204]: time="2025-05-15T00:56:12.538054942Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:12.539870 env[1204]: time="2025-05-15T00:56:12.539842121Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:12.541202 env[1204]: time="2025-05-15T00:56:12.541166696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:12.543028 env[1204]: time="2025-05-15T00:56:12.543002287Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:12.543565 env[1204]: time="2025-05-15T00:56:12.543522001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 00:56:12.544090 env[1204]: time="2025-05-15T00:56:12.544064328Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 00:56:13.203734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703889927.mount: Deactivated successfully. 
May 15 00:56:15.660364 env[1204]: time="2025-05-15T00:56:15.660283825Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:15.662693 env[1204]: time="2025-05-15T00:56:15.662641116Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:15.664762 env[1204]: time="2025-05-15T00:56:15.664733270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:15.666785 env[1204]: time="2025-05-15T00:56:15.666757318Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:15.667740 env[1204]: time="2025-05-15T00:56:15.667687321Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 00:56:18.248834 systemd[1]: Stopped kubelet.service. May 15 00:56:18.250754 systemd[1]: Starting kubelet.service... May 15 00:56:18.268423 systemd[1]: Reloading. 
May 15 00:56:18.323880 /usr/lib/systemd/system-generators/torcx-generator[1503]: time="2025-05-15T00:56:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:56:18.323907 /usr/lib/systemd/system-generators/torcx-generator[1503]: time="2025-05-15T00:56:18Z" level=info msg="torcx already run" May 15 00:56:18.701345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:56:18.701359 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:56:18.717891 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:56:18.790750 systemd[1]: Started kubelet.service. May 15 00:56:18.793779 systemd[1]: Stopping kubelet.service... May 15 00:56:18.794246 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:56:18.794437 systemd[1]: Stopped kubelet.service. May 15 00:56:18.795770 systemd[1]: Starting kubelet.service... May 15 00:56:18.872444 systemd[1]: Started kubelet.service. May 15 00:56:18.906601 kubelet[1555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:56:18.906601 kubelet[1555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 15 00:56:18.906601 kubelet[1555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:56:18.907026 kubelet[1555]: I0515 00:56:18.906656 1555 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:56:19.132834 kubelet[1555]: I0515 00:56:19.132720 1555 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 00:56:19.132834 kubelet[1555]: I0515 00:56:19.132755 1555 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:56:19.133085 kubelet[1555]: I0515 00:56:19.133061 1555 server.go:954] "Client rotation is on, will bootstrap in background" May 15 00:56:19.158979 kubelet[1555]: E0515 00:56:19.158932 1555 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:19.161247 kubelet[1555]: I0515 00:56:19.161222 1555 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:56:19.168803 kubelet[1555]: E0515 00:56:19.168780 1555 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:56:19.168859 kubelet[1555]: I0515 00:56:19.168804 1555 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 15 00:56:19.172703 kubelet[1555]: I0515 00:56:19.172678 1555 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:56:19.173692 kubelet[1555]: I0515 00:56:19.173660 1555 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:56:19.173835 kubelet[1555]: I0515 00:56:19.173688 1555 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVer
sion":2} May 15 00:56:19.173924 kubelet[1555]: I0515 00:56:19.173837 1555 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:56:19.173924 kubelet[1555]: I0515 00:56:19.173845 1555 container_manager_linux.go:304] "Creating device plugin manager" May 15 00:56:19.173972 kubelet[1555]: I0515 00:56:19.173941 1555 state_mem.go:36] "Initialized new in-memory state store" May 15 00:56:19.176262 kubelet[1555]: I0515 00:56:19.176251 1555 kubelet.go:446] "Attempting to sync node with API server" May 15 00:56:19.176325 kubelet[1555]: I0515 00:56:19.176266 1555 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:56:19.176325 kubelet[1555]: I0515 00:56:19.176281 1555 kubelet.go:352] "Adding apiserver pod source" May 15 00:56:19.176325 kubelet[1555]: I0515 00:56:19.176289 1555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:56:19.196701 kubelet[1555]: I0515 00:56:19.196672 1555 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 00:56:19.197085 kubelet[1555]: I0515 00:56:19.197061 1555 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:56:19.206039 kubelet[1555]: W0515 00:56:19.206021 1555 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 00:56:19.206841 kubelet[1555]: W0515 00:56:19.206797 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 15 00:56:19.206894 kubelet[1555]: E0515 00:56:19.206849 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:19.208456 kubelet[1555]: W0515 00:56:19.208418 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 15 00:56:19.208507 kubelet[1555]: E0515 00:56:19.208457 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:19.209219 kubelet[1555]: I0515 00:56:19.209203 1555 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 00:56:19.209252 kubelet[1555]: I0515 00:56:19.209239 1555 server.go:1287] "Started kubelet" May 15 00:56:19.209314 kubelet[1555]: I0515 00:56:19.209286 1555 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:56:19.210038 kubelet[1555]: I0515 00:56:19.209868 1555 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:56:19.210191 kubelet[1555]: I0515 
00:56:19.210160 1555 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:56:19.211792 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 15 00:56:19.211845 kubelet[1555]: I0515 00:56:19.210963 1555 server.go:490] "Adding debug handlers to kubelet server" May 15 00:56:19.211845 kubelet[1555]: I0515 00:56:19.211786 1555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:56:19.212042 kubelet[1555]: I0515 00:56:19.212024 1555 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:56:19.213769 kubelet[1555]: E0515 00:56:19.213643 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:56:19.213769 kubelet[1555]: I0515 00:56:19.213670 1555 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 00:56:19.213857 kubelet[1555]: I0515 00:56:19.213831 1555 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:56:19.213884 kubelet[1555]: I0515 00:56:19.213868 1555 reconciler.go:26] "Reconciler: start to sync state" May 15 00:56:19.213995 kubelet[1555]: E0515 00:56:19.213967 1555 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:56:19.214101 kubelet[1555]: E0515 00:56:19.212961 1555 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8d5598419f2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:56:19.209215787 +0000 UTC m=+0.333185171,LastTimestamp:2025-05-15 00:56:19.209215787 +0000 UTC m=+0.333185171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:56:19.214234 kubelet[1555]: I0515 00:56:19.214193 1555 factory.go:221] Registration of the systemd container factory successfully May 15 00:56:19.214403 kubelet[1555]: I0515 00:56:19.214389 1555 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:56:19.214525 kubelet[1555]: E0515 00:56:19.214498 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" May 15 00:56:19.214785 kubelet[1555]: W0515 00:56:19.214751 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused May 15 
00:56:19.214833 kubelet[1555]: E0515 00:56:19.214796 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:19.215294 kubelet[1555]: I0515 00:56:19.215280 1555 factory.go:221] Registration of the containerd container factory successfully May 15 00:56:19.224319 kubelet[1555]: I0515 00:56:19.224290 1555 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 00:56:19.224319 kubelet[1555]: I0515 00:56:19.224312 1555 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 00:56:19.224319 kubelet[1555]: I0515 00:56:19.224328 1555 state_mem.go:36] "Initialized new in-memory state store" May 15 00:56:19.227719 kubelet[1555]: I0515 00:56:19.227690 1555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:56:19.228653 kubelet[1555]: I0515 00:56:19.228637 1555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:56:19.228722 kubelet[1555]: I0515 00:56:19.228671 1555 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 00:56:19.228722 kubelet[1555]: I0515 00:56:19.228695 1555 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 15 00:56:19.228722 kubelet[1555]: I0515 00:56:19.228705 1555 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 00:56:19.228884 kubelet[1555]: E0515 00:56:19.228863 1555 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:56:19.229414 kubelet[1555]: W0515 00:56:19.229357 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 15 00:56:19.229458 kubelet[1555]: E0515 00:56:19.229408 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:19.314433 kubelet[1555]: E0515 00:56:19.314414 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.329650 kubelet[1555]: E0515 00:56:19.329621 1555 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 00:56:19.415779 kubelet[1555]: E0515 00:56:19.415204 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.416696 kubelet[1555]: E0515 00:56:19.416588 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms"
May 15 00:56:19.516268 kubelet[1555]: E0515 00:56:19.516211 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.526747 kubelet[1555]: I0515 00:56:19.526706 1555 policy_none.go:49] "None policy: Start"
May 15 00:56:19.526747 kubelet[1555]: I0515 00:56:19.526742 1555 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 00:56:19.526847 kubelet[1555]: I0515 00:56:19.526759 1555 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:56:19.529745 kubelet[1555]: E0515 00:56:19.529678 1555 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 00:56:19.533377 systemd[1]: Created slice kubepods.slice.
May 15 00:56:19.537133 systemd[1]: Created slice kubepods-burstable.slice.
May 15 00:56:19.539424 systemd[1]: Created slice kubepods-besteffort.slice.
May 15 00:56:19.547965 kubelet[1555]: I0515 00:56:19.547931 1555 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:56:19.548210 kubelet[1555]: I0515 00:56:19.548093 1555 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:56:19.548210 kubelet[1555]: I0515 00:56:19.548103 1555 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:56:19.548847 kubelet[1555]: I0515 00:56:19.548315 1555 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:56:19.549143 kubelet[1555]: E0515 00:56:19.549099 1555 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 00:56:19.549222 kubelet[1555]: E0515 00:56:19.549162 1555 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 00:56:19.649523 kubelet[1555]: I0515 00:56:19.649486 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:56:19.649870 kubelet[1555]: E0515 00:56:19.649845 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
May 15 00:56:19.817871 kubelet[1555]: E0515 00:56:19.817714 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms"
May 15 00:56:19.852195 kubelet[1555]: I0515 00:56:19.852143 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:56:19.852666 kubelet[1555]: E0515 00:56:19.852614 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
May 15 00:56:19.937129 systemd[1]: Created slice kubepods-burstable-pod5dfc0c82a15b90767c65ec38b58dbe5c.slice.
May 15 00:56:19.945820 kubelet[1555]: E0515 00:56:19.945772 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:56:19.947540 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice.
May 15 00:56:19.955121 kubelet[1555]: E0515 00:56:19.955089 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:56:19.957192 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice.
May 15 00:56:19.958595 kubelet[1555]: E0515 00:56:19.958571 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:56:20.019079 kubelet[1555]: I0515 00:56:20.019027 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5dfc0c82a15b90767c65ec38b58dbe5c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5dfc0c82a15b90767c65ec38b58dbe5c\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:56:20.019079 kubelet[1555]: I0515 00:56:20.019069 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:20.019079 kubelet[1555]: I0515 00:56:20.019088 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:20.019079 kubelet[1555]: I0515 00:56:20.019105 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:20.019369 kubelet[1555]: I0515 00:56:20.019123 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:20.019369 kubelet[1555]: I0515 00:56:20.019138 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:20.019369 kubelet[1555]: I0515 00:56:20.019152 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:56:20.019369 kubelet[1555]: I0515 00:56:20.019170 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5dfc0c82a15b90767c65ec38b58dbe5c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5dfc0c82a15b90767c65ec38b58dbe5c\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:56:20.019369 kubelet[1555]: I0515 00:56:20.019206 1555 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5dfc0c82a15b90767c65ec38b58dbe5c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5dfc0c82a15b90767c65ec38b58dbe5c\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:56:20.100820 kubelet[1555]: W0515 00:56:20.100670 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 15 00:56:20.100820 kubelet[1555]: E0515 00:56:20.100761 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:20.134093 kubelet[1555]: W0515 00:56:20.134059 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 15 00:56:20.134141 kubelet[1555]: E0515 00:56:20.134094 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:20.246690 kubelet[1555]: E0515 00:56:20.246666 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:20.247190 env[1204]: time="2025-05-15T00:56:20.247138711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5dfc0c82a15b90767c65ec38b58dbe5c,Namespace:kube-system,Attempt:0,}"
May 15 00:56:20.254047 kubelet[1555]: I0515 00:56:20.254020 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:56:20.254363 kubelet[1555]: E0515 00:56:20.254339 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
May 15 00:56:20.255491 kubelet[1555]: E0515 00:56:20.255464 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:20.255792 env[1204]: time="2025-05-15T00:56:20.255770010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}"
May 15 00:56:20.259020 kubelet[1555]: E0515 00:56:20.258998 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:20.259389 env[1204]: time="2025-05-15T00:56:20.259342674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}"
May 15 00:56:20.417450 kubelet[1555]: W0515 00:56:20.417280 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 15 00:56:20.417450 kubelet[1555]: E0515 00:56:20.417389 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:20.474424 kubelet[1555]: W0515 00:56:20.474364 1555 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
May 15 00:56:20.474424 kubelet[1555]: E0515 00:56:20.474427 1555 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:20.618992 kubelet[1555]: E0515 00:56:20.618934 1555 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s"
May 15 00:56:21.056021 kubelet[1555]: I0515 00:56:21.055965 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:56:21.056423 kubelet[1555]: E0515 00:56:21.056348 1555 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
May 15 00:56:21.224238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028626840.mount: Deactivated successfully.
May 15 00:56:21.231388 env[1204]: time="2025-05-15T00:56:21.231338113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.234268 env[1204]: time="2025-05-15T00:56:21.234238337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.235297 env[1204]: time="2025-05-15T00:56:21.235250093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.236245 env[1204]: time="2025-05-15T00:56:21.236215297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.238736 env[1204]: time="2025-05-15T00:56:21.238681192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.239723 env[1204]: time="2025-05-15T00:56:21.239700801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.240975 env[1204]: time="2025-05-15T00:56:21.240942742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.242157 env[1204]: time="2025-05-15T00:56:21.242118189Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.243354 env[1204]: time="2025-05-15T00:56:21.243324280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.246904 env[1204]: time="2025-05-15T00:56:21.246873449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.247974 env[1204]: time="2025-05-15T00:56:21.247942882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.249611 env[1204]: time="2025-05-15T00:56:21.249577955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:21.271000 kubelet[1555]: E0515 00:56:21.270870 1555 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:21.276001 env[1204]: time="2025-05-15T00:56:21.275930009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:21.276136 env[1204]: time="2025-05-15T00:56:21.276019173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:21.276136 env[1204]: time="2025-05-15T00:56:21.276034539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:21.276307 env[1204]: time="2025-05-15T00:56:21.276257933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2c425c95ceb83d082c1dd8aadec1c7bbf4688c2094b3e041ae5b1f5ab4ab1ee pid=1597 runtime=io.containerd.runc.v2
May 15 00:56:21.285224 env[1204]: time="2025-05-15T00:56:21.285149671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:21.285355 env[1204]: time="2025-05-15T00:56:21.285209314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:21.285355 env[1204]: time="2025-05-15T00:56:21.285221701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:21.285355 env[1204]: time="2025-05-15T00:56:21.285333784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f7f16731a00111bf38e1f79a8dbe227ba553e4990a7fe08ab6d7423abaaee8d pid=1615 runtime=io.containerd.runc.v2
May 15 00:56:21.292467 env[1204]: time="2025-05-15T00:56:21.292401430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:21.292536 env[1204]: time="2025-05-15T00:56:21.292461625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:21.292536 env[1204]: time="2025-05-15T00:56:21.292478315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:21.293348 env[1204]: time="2025-05-15T00:56:21.293092846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6211825d75956f6604b3db8455f317925bd2540c91390605c0476761c35d2c78 pid=1635 runtime=io.containerd.runc.v2
May 15 00:56:21.295914 systemd[1]: Started cri-containerd-d2c425c95ceb83d082c1dd8aadec1c7bbf4688c2094b3e041ae5b1f5ab4ab1ee.scope.
May 15 00:56:21.301772 systemd[1]: Started cri-containerd-9f7f16731a00111bf38e1f79a8dbe227ba553e4990a7fe08ab6d7423abaaee8d.scope.
May 15 00:56:21.313057 systemd[1]: Started cri-containerd-6211825d75956f6604b3db8455f317925bd2540c91390605c0476761c35d2c78.scope.
May 15 00:56:21.339886 env[1204]: time="2025-05-15T00:56:21.339828272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2c425c95ceb83d082c1dd8aadec1c7bbf4688c2094b3e041ae5b1f5ab4ab1ee\""
May 15 00:56:21.340910 kubelet[1555]: E0515 00:56:21.340877 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:21.345296 env[1204]: time="2025-05-15T00:56:21.345057312Z" level=info msg="CreateContainer within sandbox \"d2c425c95ceb83d082c1dd8aadec1c7bbf4688c2094b3e041ae5b1f5ab4ab1ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 00:56:21.348055 env[1204]: time="2025-05-15T00:56:21.348022445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f7f16731a00111bf38e1f79a8dbe227ba553e4990a7fe08ab6d7423abaaee8d\""
May 15 00:56:21.349144 kubelet[1555]: E0515 00:56:21.348946 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:21.350603 env[1204]: time="2025-05-15T00:56:21.350569067Z" level=info msg="CreateContainer within sandbox \"9f7f16731a00111bf38e1f79a8dbe227ba553e4990a7fe08ab6d7423abaaee8d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 00:56:21.352557 env[1204]: time="2025-05-15T00:56:21.352526587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5dfc0c82a15b90767c65ec38b58dbe5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6211825d75956f6604b3db8455f317925bd2540c91390605c0476761c35d2c78\""
May 15 00:56:21.353122 kubelet[1555]: E0515 00:56:21.353092 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:21.354867 env[1204]: time="2025-05-15T00:56:21.354832494Z" level=info msg="CreateContainer within sandbox \"6211825d75956f6604b3db8455f317925bd2540c91390605c0476761c35d2c78\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 00:56:21.370491 env[1204]: time="2025-05-15T00:56:21.370446491Z" level=info msg="CreateContainer within sandbox \"d2c425c95ceb83d082c1dd8aadec1c7bbf4688c2094b3e041ae5b1f5ab4ab1ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e520d23561fd64c73172c1d12e02f9ddac17cb0bb9b542968b47b032699af531\""
May 15 00:56:21.371426 env[1204]: time="2025-05-15T00:56:21.371404372Z" level=info msg="StartContainer for \"e520d23561fd64c73172c1d12e02f9ddac17cb0bb9b542968b47b032699af531\""
May 15 00:56:21.378910 env[1204]: time="2025-05-15T00:56:21.378858691Z" level=info msg="CreateContainer within sandbox \"6211825d75956f6604b3db8455f317925bd2540c91390605c0476761c35d2c78\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b65e0a10603fd103caa2ef1fc9e2a7af9e8f8256860e89c622eabe59de60efdc\""
May 15 00:56:21.379655 env[1204]: time="2025-05-15T00:56:21.379611745Z" level=info msg="StartContainer for \"b65e0a10603fd103caa2ef1fc9e2a7af9e8f8256860e89c622eabe59de60efdc\""
May 15 00:56:21.383926 env[1204]: time="2025-05-15T00:56:21.383870908Z" level=info msg="CreateContainer within sandbox \"9f7f16731a00111bf38e1f79a8dbe227ba553e4990a7fe08ab6d7423abaaee8d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"134dd21b9d5503551d9026892cc5e68720eac1c6e8517acc1de3e6b7033a5d53\""
May 15 00:56:21.385345 env[1204]: time="2025-05-15T00:56:21.385300522Z" level=info msg="StartContainer for \"134dd21b9d5503551d9026892cc5e68720eac1c6e8517acc1de3e6b7033a5d53\""
May 15 00:56:21.385928 systemd[1]: Started cri-containerd-e520d23561fd64c73172c1d12e02f9ddac17cb0bb9b542968b47b032699af531.scope.
May 15 00:56:21.398112 systemd[1]: Started cri-containerd-b65e0a10603fd103caa2ef1fc9e2a7af9e8f8256860e89c622eabe59de60efdc.scope.
May 15 00:56:21.405365 systemd[1]: Started cri-containerd-134dd21b9d5503551d9026892cc5e68720eac1c6e8517acc1de3e6b7033a5d53.scope.
May 15 00:56:21.451997 env[1204]: time="2025-05-15T00:56:21.450619352Z" level=info msg="StartContainer for \"e520d23561fd64c73172c1d12e02f9ddac17cb0bb9b542968b47b032699af531\" returns successfully"
May 15 00:56:21.527393 env[1204]: time="2025-05-15T00:56:21.527332084Z" level=info msg="StartContainer for \"134dd21b9d5503551d9026892cc5e68720eac1c6e8517acc1de3e6b7033a5d53\" returns successfully"
May 15 00:56:21.527525 env[1204]: time="2025-05-15T00:56:21.527370201Z" level=info msg="StartContainer for \"b65e0a10603fd103caa2ef1fc9e2a7af9e8f8256860e89c622eabe59de60efdc\" returns successfully"
May 15 00:56:22.234674 kubelet[1555]: E0515 00:56:22.234637 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:56:22.235019 kubelet[1555]: E0515 00:56:22.234738 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:22.235998 kubelet[1555]: E0515 00:56:22.235977 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:56:22.236070 kubelet[1555]: E0515 00:56:22.236051 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:22.237136 kubelet[1555]: E0515 00:56:22.237115 1555 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 15 00:56:22.237214 kubelet[1555]: E0515 00:56:22.237194 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:22.460724 kubelet[1555]: E0515 00:56:22.460683 1555 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 15 00:56:22.657619 kubelet[1555]: I0515 00:56:22.657502 1555 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:56:22.696906 kubelet[1555]: I0515 00:56:22.696867 1555 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 00:56:22.696906 kubelet[1555]: E0515 00:56:22.696909 1555 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 15 00:56:22.699677 kubelet[1555]: E0515 00:56:22.699630 1555 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:22.815302 kubelet[1555]: I0515 00:56:22.815255 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:22.819387 kubelet[1555]: E0515 00:56:22.819349 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:22.819387 kubelet[1555]: I0515 00:56:22.819374 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:22.820543 kubelet[1555]: E0515 00:56:22.820522 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:22.820543 kubelet[1555]: I0515 00:56:22.820538 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:22.821516 kubelet[1555]: E0515 00:56:22.821495 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:23.194515 kubelet[1555]: I0515 00:56:23.194478 1555 apiserver.go:52] "Watching apiserver"
May 15 00:56:23.214116 kubelet[1555]: I0515 00:56:23.214083 1555 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 00:56:23.237774 kubelet[1555]: I0515 00:56:23.237750 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:23.238093 kubelet[1555]: I0515 00:56:23.237824 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:23.239290 kubelet[1555]: E0515 00:56:23.239266 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:23.239290 kubelet[1555]: E0515 00:56:23.239276 1555 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:23.239413 kubelet[1555]: E0515 00:56:23.239398 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:23.239452 kubelet[1555]: E0515 00:56:23.239401 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:24.634161 kubelet[1555]: I0515 00:56:24.634121 1555 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:24.639316 kubelet[1555]: E0515 00:56:24.639294 1555 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:24.813576 systemd[1]: Reloading.
May 15 00:56:24.879354 /usr/lib/systemd/system-generators/torcx-generator[1859]: time="2025-05-15T00:56:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 15 00:56:24.879382 /usr/lib/systemd/system-generators/torcx-generator[1859]: time="2025-05-15T00:56:24Z" level=info msg="torcx already run"
May 15 00:56:24.936766 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 00:56:24.936783 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 00:56:24.953568 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:56:25.040468 systemd[1]: Stopping kubelet.service...
May 15 00:56:25.063516 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:56:25.063666 systemd[1]: Stopped kubelet.service.
May 15 00:56:25.065092 systemd[1]: Starting kubelet.service...
May 15 00:56:25.149205 systemd[1]: Started kubelet.service.
May 15 00:56:25.186384 kubelet[1903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:56:25.186384 kubelet[1903]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 00:56:25.186384 kubelet[1903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:56:25.186716 kubelet[1903]: I0515 00:56:25.186457 1903 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:56:25.192330 kubelet[1903]: I0515 00:56:25.192256 1903 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 00:56:25.192330 kubelet[1903]: I0515 00:56:25.192280 1903 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:56:25.192623 kubelet[1903]: I0515 00:56:25.192592 1903 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 00:56:25.193739 kubelet[1903]: I0515 00:56:25.193715 1903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 00:56:25.195550 kubelet[1903]: I0515 00:56:25.195524 1903 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:56:25.201595 kubelet[1903]: E0515 00:56:25.201547 1903 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:56:25.201595 kubelet[1903]: I0515 00:56:25.201582 1903 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:56:25.205729 kubelet[1903]: I0515 00:56:25.205704 1903 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:56:25.205967 kubelet[1903]: I0515 00:56:25.205921 1903 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:56:25.206204 kubelet[1903]: I0515 00:56:25.205962 1903 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVer
sion":2} May 15 00:56:25.206319 kubelet[1903]: I0515 00:56:25.206208 1903 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:56:25.206319 kubelet[1903]: I0515 00:56:25.206218 1903 container_manager_linux.go:304] "Creating device plugin manager" May 15 00:56:25.206319 kubelet[1903]: I0515 00:56:25.206254 1903 state_mem.go:36] "Initialized new in-memory state store" May 15 00:56:25.206392 kubelet[1903]: I0515 00:56:25.206386 1903 kubelet.go:446] "Attempting to sync node with API server" May 15 00:56:25.206416 kubelet[1903]: I0515 00:56:25.206397 1903 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:56:25.206416 kubelet[1903]: I0515 00:56:25.206414 1903 kubelet.go:352] "Adding apiserver pod source" May 15 00:56:25.206459 kubelet[1903]: I0515 00:56:25.206424 1903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:56:25.208213 kubelet[1903]: I0515 00:56:25.207345 1903 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 00:56:25.208213 kubelet[1903]: I0515 00:56:25.207776 1903 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:56:25.208290 kubelet[1903]: I0515 00:56:25.208223 1903 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 00:56:25.208290 kubelet[1903]: I0515 00:56:25.208256 1903 server.go:1287] "Started kubelet" May 15 00:56:25.209608 kubelet[1903]: I0515 00:56:25.209548 1903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:56:25.209884 kubelet[1903]: I0515 00:56:25.209872 1903 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:56:25.210014 kubelet[1903]: I0515 00:56:25.209995 1903 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:56:25.211021 kubelet[1903]: I0515 
00:56:25.211010 1903 server.go:490] "Adding debug handlers to kubelet server" May 15 00:56:25.220627 kubelet[1903]: I0515 00:56:25.220599 1903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:56:25.223899 kubelet[1903]: I0515 00:56:25.223875 1903 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:56:25.225024 kubelet[1903]: E0515 00:56:25.224955 1903 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:56:25.225069 kubelet[1903]: I0515 00:56:25.225033 1903 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 00:56:25.225252 kubelet[1903]: I0515 00:56:25.225227 1903 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:56:25.225360 kubelet[1903]: I0515 00:56:25.225339 1903 reconciler.go:26] "Reconciler: start to sync state" May 15 00:56:25.225878 kubelet[1903]: E0515 00:56:25.225852 1903 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:56:25.226032 kubelet[1903]: I0515 00:56:25.226002 1903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:56:25.228863 kubelet[1903]: I0515 00:56:25.228522 1903 factory.go:221] Registration of the containerd container factory successfully May 15 00:56:25.228863 kubelet[1903]: I0515 00:56:25.228538 1903 factory.go:221] Registration of the systemd container factory successfully May 15 00:56:25.239113 kubelet[1903]: I0515 00:56:25.239055 1903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:56:25.240239 kubelet[1903]: I0515 00:56:25.240208 1903 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:56:25.240292 kubelet[1903]: I0515 00:56:25.240247 1903 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 00:56:25.240333 kubelet[1903]: I0515 00:56:25.240317 1903 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 00:56:25.240333 kubelet[1903]: I0515 00:56:25.240332 1903 kubelet.go:2388] "Starting kubelet main sync loop" May 15 00:56:25.240439 kubelet[1903]: E0515 00:56:25.240412 1903 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:56:25.257706 kubelet[1903]: I0515 00:56:25.257678 1903 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 00:56:25.257706 kubelet[1903]: I0515 00:56:25.257697 1903 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 00:56:25.257818 kubelet[1903]: I0515 00:56:25.257716 1903 state_mem.go:36] "Initialized new in-memory state store" May 15 00:56:25.257844 kubelet[1903]: I0515 00:56:25.257835 1903 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:56:25.257868 kubelet[1903]: I0515 00:56:25.257845 1903 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:56:25.257868 kubelet[1903]: I0515 00:56:25.257861 1903 policy_none.go:49] "None policy: Start" May 15 00:56:25.257910 kubelet[1903]: I0515 00:56:25.257869 1903 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 00:56:25.257910 kubelet[1903]: I0515 00:56:25.257879 1903 state_mem.go:35] "Initializing new in-memory state store" May 15 00:56:25.258012 kubelet[1903]: I0515 00:56:25.257965 1903 state_mem.go:75] "Updated machine memory state" May 15 00:56:25.260841 kubelet[1903]: I0515 00:56:25.260823 1903 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:56:25.261023 kubelet[1903]: I0515 
00:56:25.261011 1903 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:56:25.261120 kubelet[1903]: I0515 00:56:25.261087 1903 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:56:25.261649 kubelet[1903]: I0515 00:56:25.261636 1903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:56:25.262469 kubelet[1903]: E0515 00:56:25.262450 1903 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 00:56:25.341256 kubelet[1903]: I0515 00:56:25.341223 1903 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 00:56:25.341256 kubelet[1903]: I0515 00:56:25.341244 1903 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 00:56:25.341419 kubelet[1903]: I0515 00:56:25.341292 1903 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 00:56:25.346614 kubelet[1903]: E0515 00:56:25.346594 1903 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:56:25.364438 kubelet[1903]: I0515 00:56:25.364394 1903 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:56:25.370717 kubelet[1903]: I0515 00:56:25.370697 1903 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 15 00:56:25.370787 kubelet[1903]: I0515 00:56:25.370750 1903 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 00:56:25.527096 kubelet[1903]: I0515 00:56:25.526981 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5dfc0c82a15b90767c65ec38b58dbe5c-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"5dfc0c82a15b90767c65ec38b58dbe5c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:56:25.527096 kubelet[1903]: I0515 00:56:25.527015 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5dfc0c82a15b90767c65ec38b58dbe5c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5dfc0c82a15b90767c65ec38b58dbe5c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:56:25.527096 kubelet[1903]: I0515 00:56:25.527036 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:25.527096 kubelet[1903]: I0515 00:56:25.527050 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5dfc0c82a15b90767c65ec38b58dbe5c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5dfc0c82a15b90767c65ec38b58dbe5c\") " pod="kube-system/kube-apiserver-localhost" May 15 00:56:25.527096 kubelet[1903]: I0515 00:56:25.527067 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:25.527387 kubelet[1903]: I0515 00:56:25.527080 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:25.527387 kubelet[1903]: I0515 00:56:25.527094 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:25.527387 kubelet[1903]: I0515 00:56:25.527110 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:25.527387 kubelet[1903]: I0515 00:56:25.527126 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 00:56:25.645863 kubelet[1903]: E0515 00:56:25.645817 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:25.646711 kubelet[1903]: E0515 00:56:25.646681 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:25.646933 kubelet[1903]: E0515 00:56:25.646909 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:25.809434 sudo[1939]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:56:25.809625 sudo[1939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 15 00:56:26.207540 kubelet[1903]: I0515 00:56:26.207415 1903 apiserver.go:52] "Watching apiserver" May 15 00:56:26.226257 kubelet[1903]: I0515 00:56:26.226204 1903 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:56:26.250713 kubelet[1903]: E0515 00:56:26.250689 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:26.250836 kubelet[1903]: I0515 00:56:26.250807 1903 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 00:56:26.251041 kubelet[1903]: E0515 00:56:26.251022 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:26.253127 sudo[1939]: pam_unix(sudo:session): session closed for user root May 15 00:56:26.256093 kubelet[1903]: E0515 00:56:26.255900 1903 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 00:56:26.256093 kubelet[1903]: E0515 00:56:26.255990 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:26.274860 kubelet[1903]: I0515 00:56:26.274759 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.27474359 podStartE2EDuration="2.27474359s" 
podCreationTimestamp="2025-05-15 00:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:26.26912298 +0000 UTC m=+1.116452758" watchObservedRunningTime="2025-05-15 00:56:26.27474359 +0000 UTC m=+1.122073368" May 15 00:56:26.281994 kubelet[1903]: I0515 00:56:26.281943 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.281929307 podStartE2EDuration="1.281929307s" podCreationTimestamp="2025-05-15 00:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:26.281750363 +0000 UTC m=+1.129080142" watchObservedRunningTime="2025-05-15 00:56:26.281929307 +0000 UTC m=+1.129259085" May 15 00:56:26.281994 kubelet[1903]: I0515 00:56:26.282006 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.282002319 podStartE2EDuration="1.282002319s" podCreationTimestamp="2025-05-15 00:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:26.275124687 +0000 UTC m=+1.122454455" watchObservedRunningTime="2025-05-15 00:56:26.282002319 +0000 UTC m=+1.129332097" May 15 00:56:27.252132 kubelet[1903]: E0515 00:56:27.252100 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:27.252585 kubelet[1903]: E0515 00:56:27.252254 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:27.594631 sudo[1302]: pam_unix(sudo:session): session closed for user 
root May 15 00:56:27.595851 sshd[1299]: pam_unix(sshd:session): session closed for user core May 15 00:56:27.598259 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:43312.service: Deactivated successfully. May 15 00:56:27.599016 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:56:27.599141 systemd[1]: session-5.scope: Consumed 4.222s CPU time. May 15 00:56:27.599598 systemd-logind[1192]: Session 5 logged out. Waiting for processes to exit. May 15 00:56:27.600290 systemd-logind[1192]: Removed session 5. May 15 00:56:29.723588 kubelet[1903]: E0515 00:56:29.723553 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:31.693715 kubelet[1903]: I0515 00:56:31.693684 1903 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:56:31.694044 env[1204]: time="2025-05-15T00:56:31.693928146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 00:56:31.694223 kubelet[1903]: I0515 00:56:31.694088 1903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:56:32.390330 kubelet[1903]: W0515 00:56:32.390291 1903 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 00:56:32.390576 kubelet[1903]: E0515 00:56:32.390551 1903 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 00:56:32.390669 kubelet[1903]: W0515 00:56:32.390435 1903 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 00:56:32.390757 kubelet[1903]: E0515 00:56:32.390737 1903 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 00:56:32.390844 kubelet[1903]: W0515 00:56:32.390483 1903 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is 
forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 15 00:56:32.390844 kubelet[1903]: E0515 00:56:32.390835 1903 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 00:56:32.391352 systemd[1]: Created slice kubepods-besteffort-pod25fa7f53_e303_4612_8413_37db4fc4d98d.slice. May 15 00:56:32.398462 systemd[1]: Created slice kubepods-burstable-pod50bd9f10_710d_4b99_b10c_69c74603eef5.slice. May 15 00:56:32.468288 kubelet[1903]: I0515 00:56:32.468244 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-net\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468288 kubelet[1903]: I0515 00:56:32.468281 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-hostproc\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468288 kubelet[1903]: I0515 00:56:32.468299 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-lib-modules\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 
00:56:32.468489 kubelet[1903]: I0515 00:56:32.468312 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-etc-cni-netd\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468489 kubelet[1903]: I0515 00:56:32.468327 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-hubble-tls\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468489 kubelet[1903]: I0515 00:56:32.468340 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50bd9f10-710d-4b99-b10c-69c74603eef5-clustermesh-secrets\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468489 kubelet[1903]: I0515 00:56:32.468354 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-run\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468489 kubelet[1903]: I0515 00:56:32.468365 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-cgroup\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468489 kubelet[1903]: I0515 00:56:32.468382 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cni-path\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468622 kubelet[1903]: I0515 00:56:32.468396 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25fa7f53-e303-4612-8413-37db4fc4d98d-xtables-lock\") pod \"kube-proxy-75h2m\" (UID: \"25fa7f53-e303-4612-8413-37db4fc4d98d\") " pod="kube-system/kube-proxy-75h2m" May 15 00:56:32.468622 kubelet[1903]: I0515 00:56:32.468413 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8k2x\" (UniqueName: \"kubernetes.io/projected/25fa7f53-e303-4612-8413-37db4fc4d98d-kube-api-access-j8k2x\") pod \"kube-proxy-75h2m\" (UID: \"25fa7f53-e303-4612-8413-37db4fc4d98d\") " pod="kube-system/kube-proxy-75h2m" May 15 00:56:32.468622 kubelet[1903]: I0515 00:56:32.468428 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25fa7f53-e303-4612-8413-37db4fc4d98d-kube-proxy\") pod \"kube-proxy-75h2m\" (UID: \"25fa7f53-e303-4612-8413-37db4fc4d98d\") " pod="kube-system/kube-proxy-75h2m" May 15 00:56:32.468622 kubelet[1903]: I0515 00:56:32.468440 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-config-path\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468622 kubelet[1903]: I0515 00:56:32.468451 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-xtables-lock\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468747 kubelet[1903]: I0515 00:56:32.468463 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25fa7f53-e303-4612-8413-37db4fc4d98d-lib-modules\") pod \"kube-proxy-75h2m\" (UID: \"25fa7f53-e303-4612-8413-37db4fc4d98d\") " pod="kube-system/kube-proxy-75h2m" May 15 00:56:32.468747 kubelet[1903]: I0515 00:56:32.468475 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-bpf-maps\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468747 kubelet[1903]: I0515 00:56:32.468487 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-kernel\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.468747 kubelet[1903]: I0515 00:56:32.468506 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fss7d\" (UniqueName: \"kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-kube-api-access-fss7d\") pod \"cilium-24r8h\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " pod="kube-system/cilium-24r8h" May 15 00:56:32.482019 kubelet[1903]: E0515 00:56:32.481988 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.573949 kubelet[1903]: I0515 00:56:32.573910 1903 
swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 15 00:56:32.681413 systemd[1]: Created slice kubepods-besteffort-pode8cc16a3_4608_44da_8397_f10c8967f356.slice. May 15 00:56:32.696249 kubelet[1903]: E0515 00:56:32.696207 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.696907 env[1204]: time="2025-05-15T00:56:32.696861427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75h2m,Uid:25fa7f53-e303-4612-8413-37db4fc4d98d,Namespace:kube-system,Attempt:0,}" May 15 00:56:32.711798 env[1204]: time="2025-05-15T00:56:32.711741765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:32.711798 env[1204]: time="2025-05-15T00:56:32.711773307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:32.711798 env[1204]: time="2025-05-15T00:56:32.711782859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:32.711984 env[1204]: time="2025-05-15T00:56:32.711898753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c7e27cc29577ad0c14ed284e35b54aba5dd42b5d8e0fa37de3f4427631f01be pid=1997 runtime=io.containerd.runc.v2 May 15 00:56:32.723501 systemd[1]: Started cri-containerd-3c7e27cc29577ad0c14ed284e35b54aba5dd42b5d8e0fa37de3f4427631f01be.scope. 
May 15 00:56:32.743617 env[1204]: time="2025-05-15T00:56:32.743565766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75h2m,Uid:25fa7f53-e303-4612-8413-37db4fc4d98d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c7e27cc29577ad0c14ed284e35b54aba5dd42b5d8e0fa37de3f4427631f01be\"" May 15 00:56:32.744298 kubelet[1903]: E0515 00:56:32.744267 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.746239 env[1204]: time="2025-05-15T00:56:32.746207286Z" level=info msg="CreateContainer within sandbox \"3c7e27cc29577ad0c14ed284e35b54aba5dd42b5d8e0fa37de3f4427631f01be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:56:32.770812 kubelet[1903]: I0515 00:56:32.770745 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqkvv\" (UniqueName: \"kubernetes.io/projected/e8cc16a3-4608-44da-8397-f10c8967f356-kube-api-access-nqkvv\") pod \"cilium-operator-6c4d7847fc-27mjh\" (UID: \"e8cc16a3-4608-44da-8397-f10c8967f356\") " pod="kube-system/cilium-operator-6c4d7847fc-27mjh" May 15 00:56:32.770812 kubelet[1903]: I0515 00:56:32.770793 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8cc16a3-4608-44da-8397-f10c8967f356-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-27mjh\" (UID: \"e8cc16a3-4608-44da-8397-f10c8967f356\") " pod="kube-system/cilium-operator-6c4d7847fc-27mjh" May 15 00:56:32.794691 env[1204]: time="2025-05-15T00:56:32.794631067Z" level=info msg="CreateContainer within sandbox \"3c7e27cc29577ad0c14ed284e35b54aba5dd42b5d8e0fa37de3f4427631f01be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8f92d452a4ee229a1b6dae46c041dbfb169dd327904fb12003cc87f2173ccdd\"" May 15 
00:56:32.795201 env[1204]: time="2025-05-15T00:56:32.795144615Z" level=info msg="StartContainer for \"f8f92d452a4ee229a1b6dae46c041dbfb169dd327904fb12003cc87f2173ccdd\"" May 15 00:56:32.809129 systemd[1]: Started cri-containerd-f8f92d452a4ee229a1b6dae46c041dbfb169dd327904fb12003cc87f2173ccdd.scope. May 15 00:56:32.835004 env[1204]: time="2025-05-15T00:56:32.834954055Z" level=info msg="StartContainer for \"f8f92d452a4ee229a1b6dae46c041dbfb169dd327904fb12003cc87f2173ccdd\" returns successfully" May 15 00:56:33.267388 kubelet[1903]: E0515 00:56:33.267357 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:33.267388 kubelet[1903]: E0515 00:56:33.267389 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:33.569364 kubelet[1903]: E0515 00:56:33.569263 1903 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 15 00:56:33.569364 kubelet[1903]: E0515 00:56:33.569299 1903 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-24r8h: failed to sync secret cache: timed out waiting for the condition May 15 00:56:33.569512 kubelet[1903]: E0515 00:56:33.569383 1903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-hubble-tls podName:50bd9f10-710d-4b99-b10c-69c74603eef5 nodeName:}" failed. No retries permitted until 2025-05-15 00:56:34.069355631 +0000 UTC m=+8.916685439 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-hubble-tls") pod "cilium-24r8h" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5") : failed to sync secret cache: timed out waiting for the condition May 15 00:56:33.585512 kubelet[1903]: E0515 00:56:33.585484 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:33.585868 env[1204]: time="2025-05-15T00:56:33.585836912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-27mjh,Uid:e8cc16a3-4608-44da-8397-f10c8967f356,Namespace:kube-system,Attempt:0,}" May 15 00:56:33.602137 env[1204]: time="2025-05-15T00:56:33.602068611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:33.602137 env[1204]: time="2025-05-15T00:56:33.602110906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:33.602137 env[1204]: time="2025-05-15T00:56:33.602121840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:33.602331 env[1204]: time="2025-05-15T00:56:33.602282482Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3 pid=2201 runtime=io.containerd.runc.v2 May 15 00:56:33.617076 systemd[1]: Started cri-containerd-7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3.scope. 
May 15 00:56:33.618642 kubelet[1903]: E0515 00:56:33.618437 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:33.631218 kubelet[1903]: I0515 00:56:33.630853 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75h2m" podStartSLOduration=1.630834114 podStartE2EDuration="1.630834114s" podCreationTimestamp="2025-05-15 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:33.284552411 +0000 UTC m=+8.131882189" watchObservedRunningTime="2025-05-15 00:56:33.630834114 +0000 UTC m=+8.478163892" May 15 00:56:33.648573 env[1204]: time="2025-05-15T00:56:33.648538752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-27mjh,Uid:e8cc16a3-4608-44da-8397-f10c8967f356,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3\"" May 15 00:56:33.649361 kubelet[1903]: E0515 00:56:33.649332 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:33.650416 env[1204]: time="2025-05-15T00:56:33.650380262Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:56:34.201281 kubelet[1903]: E0515 00:56:34.201233 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:34.201709 env[1204]: time="2025-05-15T00:56:34.201668093Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-24r8h,Uid:50bd9f10-710d-4b99-b10c-69c74603eef5,Namespace:kube-system,Attempt:0,}" May 15 00:56:34.217380 env[1204]: time="2025-05-15T00:56:34.217258757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:34.217380 env[1204]: time="2025-05-15T00:56:34.217313349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:34.217380 env[1204]: time="2025-05-15T00:56:34.217323291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:34.217766 env[1204]: time="2025-05-15T00:56:34.217730260Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c pid=2242 runtime=io.containerd.runc.v2 May 15 00:56:34.228327 systemd[1]: Started cri-containerd-0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c.scope. 
May 15 00:56:34.249874 env[1204]: time="2025-05-15T00:56:34.249832907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24r8h,Uid:50bd9f10-710d-4b99-b10c-69c74603eef5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\"" May 15 00:56:34.251902 kubelet[1903]: E0515 00:56:34.251874 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:34.270085 kubelet[1903]: E0515 00:56:34.270044 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:35.084036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4158866202.mount: Deactivated successfully. May 15 00:56:35.271258 kubelet[1903]: E0515 00:56:35.271169 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:35.957588 env[1204]: time="2025-05-15T00:56:35.957527317Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:35.959395 env[1204]: time="2025-05-15T00:56:35.959327272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:35.961087 env[1204]: time="2025-05-15T00:56:35.961046177Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:35.961614 env[1204]: time="2025-05-15T00:56:35.961582485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 00:56:35.962771 env[1204]: time="2025-05-15T00:56:35.962734806Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:56:35.964768 env[1204]: time="2025-05-15T00:56:35.964727818Z" level=info msg="CreateContainer within sandbox \"7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:56:35.976872 env[1204]: time="2025-05-15T00:56:35.976823968Z" level=info msg="CreateContainer within sandbox \"7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\"" May 15 00:56:35.977300 env[1204]: time="2025-05-15T00:56:35.977259522Z" level=info msg="StartContainer for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\"" May 15 00:56:35.995189 systemd[1]: Started cri-containerd-33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d.scope. 
May 15 00:56:36.018392 env[1204]: time="2025-05-15T00:56:36.018348118Z" level=info msg="StartContainer for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" returns successfully" May 15 00:56:36.274001 kubelet[1903]: E0515 00:56:36.273850 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:37.275148 kubelet[1903]: E0515 00:56:37.275115 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:39.729263 kubelet[1903]: E0515 00:56:39.729225 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:39.771341 kubelet[1903]: I0515 00:56:39.771207 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-27mjh" podStartSLOduration=5.458624433 podStartE2EDuration="7.771189498s" podCreationTimestamp="2025-05-15 00:56:32 +0000 UTC" firstStartedPulling="2025-05-15 00:56:33.65000684 +0000 UTC m=+8.497336618" lastFinishedPulling="2025-05-15 00:56:35.962571905 +0000 UTC m=+10.809901683" observedRunningTime="2025-05-15 00:56:36.410068989 +0000 UTC m=+11.257398767" watchObservedRunningTime="2025-05-15 00:56:39.771189498 +0000 UTC m=+14.618519276" May 15 00:56:40.288222 kubelet[1903]: E0515 00:56:40.288195 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:42.104291 update_engine[1195]: I0515 00:56:42.104240 1195 update_attempter.cc:509] Updating boot flags... 
May 15 00:56:42.622807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606152687.mount: Deactivated successfully. May 15 00:56:46.066549 env[1204]: time="2025-05-15T00:56:46.066477771Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:46.068729 env[1204]: time="2025-05-15T00:56:46.068700707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:46.070724 env[1204]: time="2025-05-15T00:56:46.070672824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:46.071755 env[1204]: time="2025-05-15T00:56:46.071701551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 00:56:46.078850 env[1204]: time="2025-05-15T00:56:46.078811272Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:56:46.090448 env[1204]: time="2025-05-15T00:56:46.090402649Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\"" May 15 00:56:46.090787 env[1204]: time="2025-05-15T00:56:46.090753805Z" level=info msg="StartContainer 
for \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\"" May 15 00:56:46.105862 systemd[1]: Started cri-containerd-adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8.scope. May 15 00:56:46.137547 systemd[1]: cri-containerd-adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8.scope: Deactivated successfully. May 15 00:56:46.224316 env[1204]: time="2025-05-15T00:56:46.224248081Z" level=info msg="StartContainer for \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\" returns successfully" May 15 00:56:46.300169 kubelet[1903]: E0515 00:56:46.300134 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:46.661771 env[1204]: time="2025-05-15T00:56:46.661720721Z" level=info msg="shim disconnected" id=adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8 May 15 00:56:46.661771 env[1204]: time="2025-05-15T00:56:46.661768640Z" level=warning msg="cleaning up after shim disconnected" id=adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8 namespace=k8s.io May 15 00:56:46.661989 env[1204]: time="2025-05-15T00:56:46.661781556Z" level=info msg="cleaning up dead shim" May 15 00:56:46.668546 env[1204]: time="2025-05-15T00:56:46.668491160Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2379 runtime=io.containerd.runc.v2\n" May 15 00:56:47.086783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8-rootfs.mount: Deactivated successfully. 
May 15 00:56:47.302647 kubelet[1903]: E0515 00:56:47.302614 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:47.305458 env[1204]: time="2025-05-15T00:56:47.305417023Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:56:47.319583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827445142.mount: Deactivated successfully. May 15 00:56:47.320552 env[1204]: time="2025-05-15T00:56:47.320492802Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\"" May 15 00:56:47.321066 env[1204]: time="2025-05-15T00:56:47.321033554Z" level=info msg="StartContainer for \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\"" May 15 00:56:47.338097 systemd[1]: Started cri-containerd-2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69.scope. May 15 00:56:47.358507 env[1204]: time="2025-05-15T00:56:47.358452723Z" level=info msg="StartContainer for \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\" returns successfully" May 15 00:56:47.367248 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:56:47.367446 systemd[1]: Stopped systemd-sysctl.service. May 15 00:56:47.367601 systemd[1]: Stopping systemd-sysctl.service... May 15 00:56:47.368933 systemd[1]: Starting systemd-sysctl.service... May 15 00:56:47.369150 systemd[1]: cri-containerd-2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69.scope: Deactivated successfully. May 15 00:56:47.379987 systemd[1]: Finished systemd-sysctl.service. 
May 15 00:56:47.394869 env[1204]: time="2025-05-15T00:56:47.394818232Z" level=info msg="shim disconnected" id=2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69 May 15 00:56:47.395073 env[1204]: time="2025-05-15T00:56:47.394876883Z" level=warning msg="cleaning up after shim disconnected" id=2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69 namespace=k8s.io May 15 00:56:47.395073 env[1204]: time="2025-05-15T00:56:47.394886913Z" level=info msg="cleaning up dead shim" May 15 00:56:47.400586 env[1204]: time="2025-05-15T00:56:47.400556816Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2443 runtime=io.containerd.runc.v2\n" May 15 00:56:47.618871 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:55576.service. May 15 00:56:47.660186 sshd[2456]: Accepted publickey for core from 10.0.0.1 port 55576 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:56:47.661513 sshd[2456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:56:47.664848 systemd-logind[1192]: New session 6 of user core. May 15 00:56:47.665522 systemd[1]: Started session-6.scope. May 15 00:56:47.775481 sshd[2456]: pam_unix(sshd:session): session closed for user core May 15 00:56:47.777789 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:55576.service: Deactivated successfully. May 15 00:56:47.778418 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:56:47.778955 systemd-logind[1192]: Session 6 logged out. Waiting for processes to exit. May 15 00:56:47.779619 systemd-logind[1192]: Removed session 6. May 15 00:56:48.086986 systemd[1]: run-containerd-runc-k8s.io-2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69-runc.4iZ3zq.mount: Deactivated successfully. 
May 15 00:56:48.087105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69-rootfs.mount: Deactivated successfully. May 15 00:56:48.306045 kubelet[1903]: E0515 00:56:48.306017 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:48.307452 env[1204]: time="2025-05-15T00:56:48.307419121Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:56:48.346872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517033879.mount: Deactivated successfully. May 15 00:56:48.349767 env[1204]: time="2025-05-15T00:56:48.349726041Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\"" May 15 00:56:48.350200 env[1204]: time="2025-05-15T00:56:48.350147545Z" level=info msg="StartContainer for \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\"" May 15 00:56:48.367653 systemd[1]: Started cri-containerd-0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb.scope. May 15 00:56:48.392331 systemd[1]: cri-containerd-0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb.scope: Deactivated successfully. 
May 15 00:56:48.454448 env[1204]: time="2025-05-15T00:56:48.454383510Z" level=info msg="StartContainer for \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\" returns successfully" May 15 00:56:48.602159 env[1204]: time="2025-05-15T00:56:48.602032253Z" level=info msg="shim disconnected" id=0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb May 15 00:56:48.602159 env[1204]: time="2025-05-15T00:56:48.602090522Z" level=warning msg="cleaning up after shim disconnected" id=0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb namespace=k8s.io May 15 00:56:48.602159 env[1204]: time="2025-05-15T00:56:48.602099501Z" level=info msg="cleaning up dead shim" May 15 00:56:48.608604 env[1204]: time="2025-05-15T00:56:48.608563345Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513 runtime=io.containerd.runc.v2\n" May 15 00:56:49.086899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb-rootfs.mount: Deactivated successfully. 
May 15 00:56:49.310186 kubelet[1903]: E0515 00:56:49.310149 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:49.311863 env[1204]: time="2025-05-15T00:56:49.311822457Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:56:49.358886 env[1204]: time="2025-05-15T00:56:49.358770049Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\"" May 15 00:56:49.359291 env[1204]: time="2025-05-15T00:56:49.359255711Z" level=info msg="StartContainer for \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\"" May 15 00:56:49.378564 systemd[1]: Started cri-containerd-ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f.scope. May 15 00:56:49.400899 systemd[1]: cri-containerd-ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f.scope: Deactivated successfully. 
May 15 00:56:49.402390 env[1204]: time="2025-05-15T00:56:49.402352666Z" level=info msg="StartContainer for \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\" returns successfully" May 15 00:56:49.423350 env[1204]: time="2025-05-15T00:56:49.423302390Z" level=info msg="shim disconnected" id=ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f May 15 00:56:49.423350 env[1204]: time="2025-05-15T00:56:49.423352582Z" level=warning msg="cleaning up after shim disconnected" id=ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f namespace=k8s.io May 15 00:56:49.423571 env[1204]: time="2025-05-15T00:56:49.423361501Z" level=info msg="cleaning up dead shim" May 15 00:56:49.429551 env[1204]: time="2025-05-15T00:56:49.429512486Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2567 runtime=io.containerd.runc.v2\n" May 15 00:56:50.087288 systemd[1]: run-containerd-runc-k8s.io-ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f-runc.XwRVEU.mount: Deactivated successfully. May 15 00:56:50.087389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f-rootfs.mount: Deactivated successfully. 
May 15 00:56:50.313499 kubelet[1903]: E0515 00:56:50.313455 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:50.315240 env[1204]: time="2025-05-15T00:56:50.315193774Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:56:50.333571 env[1204]: time="2025-05-15T00:56:50.333532472Z" level=info msg="CreateContainer within sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\"" May 15 00:56:50.334281 env[1204]: time="2025-05-15T00:56:50.334226164Z" level=info msg="StartContainer for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\"" May 15 00:56:50.350021 systemd[1]: Started cri-containerd-2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430.scope. May 15 00:56:50.374275 env[1204]: time="2025-05-15T00:56:50.374141970Z" level=info msg="StartContainer for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" returns successfully" May 15 00:56:50.467203 kubelet[1903]: I0515 00:56:50.466028 1903 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 00:56:50.684340 systemd[1]: Created slice kubepods-burstable-podaa9e1560_caef_4555_bdb8_12d6e0d65453.slice. May 15 00:56:50.690108 systemd[1]: Created slice kubepods-burstable-poda6a81cb4_8421_4d17_9a6c_623bb7b006ca.slice. 
May 15 00:56:50.696306 kubelet[1903]: I0515 00:56:50.696243 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqqq\" (UniqueName: \"kubernetes.io/projected/aa9e1560-caef-4555-bdb8-12d6e0d65453-kube-api-access-lqqqq\") pod \"coredns-668d6bf9bc-f8wrc\" (UID: \"aa9e1560-caef-4555-bdb8-12d6e0d65453\") " pod="kube-system/coredns-668d6bf9bc-f8wrc" May 15 00:56:50.696459 kubelet[1903]: I0515 00:56:50.696310 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a6a81cb4-8421-4d17-9a6c-623bb7b006ca-config-volume\") pod \"coredns-668d6bf9bc-jw4mn\" (UID: \"a6a81cb4-8421-4d17-9a6c-623bb7b006ca\") " pod="kube-system/coredns-668d6bf9bc-jw4mn" May 15 00:56:50.696459 kubelet[1903]: I0515 00:56:50.696335 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ddkb\" (UniqueName: \"kubernetes.io/projected/a6a81cb4-8421-4d17-9a6c-623bb7b006ca-kube-api-access-4ddkb\") pod \"coredns-668d6bf9bc-jw4mn\" (UID: \"a6a81cb4-8421-4d17-9a6c-623bb7b006ca\") " pod="kube-system/coredns-668d6bf9bc-jw4mn" May 15 00:56:50.696459 kubelet[1903]: I0515 00:56:50.696356 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa9e1560-caef-4555-bdb8-12d6e0d65453-config-volume\") pod \"coredns-668d6bf9bc-f8wrc\" (UID: \"aa9e1560-caef-4555-bdb8-12d6e0d65453\") " pod="kube-system/coredns-668d6bf9bc-f8wrc" May 15 00:56:50.991128 kubelet[1903]: E0515 00:56:50.990998 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:50.991714 env[1204]: time="2025-05-15T00:56:50.991675468Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-f8wrc,Uid:aa9e1560-caef-4555-bdb8-12d6e0d65453,Namespace:kube-system,Attempt:0,}" May 15 00:56:50.992843 kubelet[1903]: E0515 00:56:50.992821 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:50.993154 env[1204]: time="2025-05-15T00:56:50.993112152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4mn,Uid:a6a81cb4-8421-4d17-9a6c-623bb7b006ca,Namespace:kube-system,Attempt:0,}" May 15 00:56:51.317967 kubelet[1903]: E0515 00:56:51.317933 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:51.338984 kubelet[1903]: I0515 00:56:51.338906 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-24r8h" podStartSLOduration=7.514883757 podStartE2EDuration="19.338874765s" podCreationTimestamp="2025-05-15 00:56:32 +0000 UTC" firstStartedPulling="2025-05-15 00:56:34.252463902 +0000 UTC m=+9.099793680" lastFinishedPulling="2025-05-15 00:56:46.07645491 +0000 UTC m=+20.923784688" observedRunningTime="2025-05-15 00:56:51.337795886 +0000 UTC m=+26.185125664" watchObservedRunningTime="2025-05-15 00:56:51.338874765 +0000 UTC m=+26.186204543" May 15 00:56:52.319115 kubelet[1903]: E0515 00:56:52.319063 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:52.424496 systemd-networkd[1025]: cilium_host: Link UP May 15 00:56:52.424661 systemd-networkd[1025]: cilium_net: Link UP May 15 00:56:52.427127 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 00:56:52.427277 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 
00:56:52.428020 systemd-networkd[1025]: cilium_net: Gained carrier May 15 00:56:52.428241 systemd-networkd[1025]: cilium_host: Gained carrier May 15 00:56:52.503703 systemd-networkd[1025]: cilium_vxlan: Link UP May 15 00:56:52.503713 systemd-networkd[1025]: cilium_vxlan: Gained carrier May 15 00:56:52.684212 kernel: NET: Registered PF_ALG protocol family May 15 00:56:52.780351 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:55590.service. May 15 00:56:52.819992 sshd[2859]: Accepted publickey for core from 10.0.0.1 port 55590 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:56:52.821258 sshd[2859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:56:52.825223 systemd-logind[1192]: New session 7 of user core. May 15 00:56:52.825877 systemd[1]: Started session-7.scope. May 15 00:56:52.950370 sshd[2859]: pam_unix(sshd:session): session closed for user core May 15 00:56:52.953268 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:55590.service: Deactivated successfully. May 15 00:56:52.953908 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:56:52.954546 systemd-logind[1192]: Session 7 logged out. Waiting for processes to exit. May 15 00:56:52.955241 systemd-logind[1192]: Removed session 7. 
May 15 00:56:52.984352 systemd-networkd[1025]: cilium_host: Gained IPv6LL
May 15 00:56:53.212047 systemd-networkd[1025]: lxc_health: Link UP
May 15 00:56:53.221023 systemd-networkd[1025]: lxc_health: Gained carrier
May 15 00:56:53.221194 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 00:56:53.241370 systemd-networkd[1025]: cilium_net: Gained IPv6LL
May 15 00:56:53.322156 kubelet[1903]: E0515 00:56:53.321856 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:53.567210 systemd-networkd[1025]: lxc71a137008d94: Link UP
May 15 00:56:53.581199 kernel: eth0: renamed from tmp1da1a
May 15 00:56:53.589476 systemd-networkd[1025]: lxc31ff6e8e7204: Link UP
May 15 00:56:53.599479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 15 00:56:53.599578 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc71a137008d94: link becomes ready
May 15 00:56:53.600189 kernel: eth0: renamed from tmp410a3
May 15 00:56:53.605620 systemd-networkd[1025]: lxc71a137008d94: Gained carrier
May 15 00:56:53.606868 systemd-networkd[1025]: lxc31ff6e8e7204: Gained carrier
May 15 00:56:53.608034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc31ff6e8e7204: link becomes ready
May 15 00:56:53.816349 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL
May 15 00:56:54.322987 kubelet[1903]: E0515 00:56:54.322959 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:55.098370 systemd-networkd[1025]: lxc_health: Gained IPv6LL
May 15 00:56:55.160267 systemd-networkd[1025]: lxc31ff6e8e7204: Gained IPv6LL
May 15 00:56:55.288359 systemd-networkd[1025]: lxc71a137008d94: Gained IPv6LL
May 15 00:56:55.324410 kubelet[1903]: E0515 00:56:55.324386 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:56.326143 kubelet[1903]: E0515 00:56:56.326107 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:56.978091 env[1204]: time="2025-05-15T00:56:56.977996955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:56.978091 env[1204]: time="2025-05-15T00:56:56.978044169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:56.978091 env[1204]: time="2025-05-15T00:56:56.978054561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:56.978558 env[1204]: time="2025-05-15T00:56:56.978237787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1da1a54926fed273e8e21e1f24b12bbc2220dc44510a30d5ed159c2e0455b869 pid=3149 runtime=io.containerd.runc.v2
May 15 00:56:56.995155 systemd[1]: run-containerd-runc-k8s.io-1da1a54926fed273e8e21e1f24b12bbc2220dc44510a30d5ed159c2e0455b869-runc.Kc4zDd.mount: Deactivated successfully.
May 15 00:56:56.997992 systemd[1]: Started cri-containerd-1da1a54926fed273e8e21e1f24b12bbc2220dc44510a30d5ed159c2e0455b869.scope.
May 15 00:56:57.006464 systemd-resolved[1148]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:56:57.028296 env[1204]: time="2025-05-15T00:56:57.028260924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jw4mn,Uid:a6a81cb4-8421-4d17-9a6c-623bb7b006ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"1da1a54926fed273e8e21e1f24b12bbc2220dc44510a30d5ed159c2e0455b869\""
May 15 00:56:57.029152 kubelet[1903]: E0515 00:56:57.029129 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:57.031432 env[1204]: time="2025-05-15T00:56:57.031403757Z" level=info msg="CreateContainer within sandbox \"1da1a54926fed273e8e21e1f24b12bbc2220dc44510a30d5ed159c2e0455b869\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:56:57.033376 env[1204]: time="2025-05-15T00:56:57.033311434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:57.033463 env[1204]: time="2025-05-15T00:56:57.033371674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:57.033463 env[1204]: time="2025-05-15T00:56:57.033388277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:57.033656 env[1204]: time="2025-05-15T00:56:57.033614739Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/410a35b2ff1083af9cf5afa260149561c10662e3e38814e3a932af8f18aa00ac pid=3188 runtime=io.containerd.runc.v2
May 15 00:56:57.047839 systemd[1]: Started cri-containerd-410a35b2ff1083af9cf5afa260149561c10662e3e38814e3a932af8f18aa00ac.scope.
May 15 00:56:57.061042 systemd-resolved[1148]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:56:57.083374 env[1204]: time="2025-05-15T00:56:57.083313944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f8wrc,Uid:aa9e1560-caef-4555-bdb8-12d6e0d65453,Namespace:kube-system,Attempt:0,} returns sandbox id \"410a35b2ff1083af9cf5afa260149561c10662e3e38814e3a932af8f18aa00ac\""
May 15 00:56:57.084010 kubelet[1903]: E0515 00:56:57.083986 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:57.085985 env[1204]: time="2025-05-15T00:56:57.085946097Z" level=info msg="CreateContainer within sandbox \"410a35b2ff1083af9cf5afa260149561c10662e3e38814e3a932af8f18aa00ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:56:57.691665 env[1204]: time="2025-05-15T00:56:57.691610059Z" level=info msg="CreateContainer within sandbox \"1da1a54926fed273e8e21e1f24b12bbc2220dc44510a30d5ed159c2e0455b869\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3375d818d503e52e3c08654acd91a12004c3ab66ea508bb7c4bac62662b57fb\""
May 15 00:56:57.692218 env[1204]: time="2025-05-15T00:56:57.692192833Z" level=info msg="StartContainer for \"e3375d818d503e52e3c08654acd91a12004c3ab66ea508bb7c4bac62662b57fb\""
May 15 00:56:57.709206 systemd[1]: Started cri-containerd-e3375d818d503e52e3c08654acd91a12004c3ab66ea508bb7c4bac62662b57fb.scope.
May 15 00:56:57.713764 env[1204]: time="2025-05-15T00:56:57.713720164Z" level=info msg="CreateContainer within sandbox \"410a35b2ff1083af9cf5afa260149561c10662e3e38814e3a932af8f18aa00ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8b9600e0ac37d3ce08d134724a7a5b078344d18e64a54a63d250be7ed1faf08\""
May 15 00:56:57.714495 env[1204]: time="2025-05-15T00:56:57.714456524Z" level=info msg="StartContainer for \"e8b9600e0ac37d3ce08d134724a7a5b078344d18e64a54a63d250be7ed1faf08\""
May 15 00:56:57.734393 systemd[1]: Started cri-containerd-e8b9600e0ac37d3ce08d134724a7a5b078344d18e64a54a63d250be7ed1faf08.scope.
May 15 00:56:57.749034 env[1204]: time="2025-05-15T00:56:57.748738150Z" level=info msg="StartContainer for \"e3375d818d503e52e3c08654acd91a12004c3ab66ea508bb7c4bac62662b57fb\" returns successfully"
May 15 00:56:57.762768 env[1204]: time="2025-05-15T00:56:57.762725214Z" level=info msg="StartContainer for \"e8b9600e0ac37d3ce08d134724a7a5b078344d18e64a54a63d250be7ed1faf08\" returns successfully"
May 15 00:56:57.954771 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:58298.service.
May 15 00:56:57.993531 sshd[3299]: Accepted publickey for core from 10.0.0.1 port 58298 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:56:57.994589 sshd[3299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:56:57.997582 systemd-logind[1192]: New session 8 of user core.
May 15 00:56:57.998555 systemd[1]: Started session-8.scope.
May 15 00:56:58.108644 sshd[3299]: pam_unix(sshd:session): session closed for user core
May 15 00:56:58.110971 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:58298.service: Deactivated successfully.
May 15 00:56:58.111875 systemd[1]: session-8.scope: Deactivated successfully.
May 15 00:56:58.112493 systemd-logind[1192]: Session 8 logged out. Waiting for processes to exit.
May 15 00:56:58.113342 systemd-logind[1192]: Removed session 8.
May 15 00:56:58.332849 kubelet[1903]: E0515 00:56:58.332810 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:58.334546 kubelet[1903]: E0515 00:56:58.334526 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:58.342436 kubelet[1903]: I0515 00:56:58.342285 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f8wrc" podStartSLOduration=26.342270855 podStartE2EDuration="26.342270855s" podCreationTimestamp="2025-05-15 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:58.341566302 +0000 UTC m=+33.188896100" watchObservedRunningTime="2025-05-15 00:56:58.342270855 +0000 UTC m=+33.189600643"
May 15 00:56:59.336350 kubelet[1903]: E0515 00:56:59.336306 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:59.336350 kubelet[1903]: E0515 00:56:59.336354 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:00.338343 kubelet[1903]: E0515 00:57:00.338316 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:00.338343 kubelet[1903]: E0515 00:57:00.338333 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:03.112600 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:58306.service.
May 15 00:57:03.150700 sshd[3322]: Accepted publickey for core from 10.0.0.1 port 58306 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:03.151862 sshd[3322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:03.155460 systemd-logind[1192]: New session 9 of user core.
May 15 00:57:03.156221 systemd[1]: Started session-9.scope.
May 15 00:57:03.384552 sshd[3322]: pam_unix(sshd:session): session closed for user core
May 15 00:57:03.386691 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:58306.service: Deactivated successfully.
May 15 00:57:03.387321 systemd[1]: session-9.scope: Deactivated successfully.
May 15 00:57:03.387793 systemd-logind[1192]: Session 9 logged out. Waiting for processes to exit.
May 15 00:57:03.388492 systemd-logind[1192]: Removed session 9.
May 15 00:57:08.388831 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:47544.service.
May 15 00:57:08.433146 sshd[3336]: Accepted publickey for core from 10.0.0.1 port 47544 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:08.434062 sshd[3336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:08.437310 systemd-logind[1192]: New session 10 of user core.
May 15 00:57:08.438065 systemd[1]: Started session-10.scope.
May 15 00:57:08.543400 sshd[3336]: pam_unix(sshd:session): session closed for user core
May 15 00:57:08.546295 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:47544.service: Deactivated successfully.
May 15 00:57:08.546811 systemd[1]: session-10.scope: Deactivated successfully.
May 15 00:57:08.547341 systemd-logind[1192]: Session 10 logged out. Waiting for processes to exit.
May 15 00:57:08.548330 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:47546.service.
May 15 00:57:08.549010 systemd-logind[1192]: Removed session 10.
May 15 00:57:08.585964 sshd[3350]: Accepted publickey for core from 10.0.0.1 port 47546 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:08.587010 sshd[3350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:08.590184 systemd-logind[1192]: New session 11 of user core.
May 15 00:57:08.591071 systemd[1]: Started session-11.scope.
May 15 00:57:08.737655 sshd[3350]: pam_unix(sshd:session): session closed for user core
May 15 00:57:08.741433 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:47546.service: Deactivated successfully.
May 15 00:57:08.742150 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:57:08.744362 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:47552.service.
May 15 00:57:08.744980 systemd-logind[1192]: Session 11 logged out. Waiting for processes to exit.
May 15 00:57:08.749542 systemd-logind[1192]: Removed session 11.
May 15 00:57:08.789933 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 47552 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:08.791154 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:08.794878 systemd-logind[1192]: New session 12 of user core.
May 15 00:57:08.795825 systemd[1]: Started session-12.scope.
May 15 00:57:08.905707 sshd[3362]: pam_unix(sshd:session): session closed for user core
May 15 00:57:08.908030 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:47552.service: Deactivated successfully.
May 15 00:57:08.908701 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:57:08.909151 systemd-logind[1192]: Session 12 logged out. Waiting for processes to exit.
May 15 00:57:08.909788 systemd-logind[1192]: Removed session 12.
May 15 00:57:13.910076 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:47556.service.
May 15 00:57:13.948879 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 47556 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:13.949902 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:13.953290 systemd-logind[1192]: New session 13 of user core.
May 15 00:57:13.954029 systemd[1]: Started session-13.scope.
May 15 00:57:14.060786 sshd[3375]: pam_unix(sshd:session): session closed for user core
May 15 00:57:14.062741 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:47556.service: Deactivated successfully.
May 15 00:57:14.063401 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:57:14.064014 systemd-logind[1192]: Session 13 logged out. Waiting for processes to exit.
May 15 00:57:14.064630 systemd-logind[1192]: Removed session 13.
May 15 00:57:19.065250 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:48618.service.
May 15 00:57:19.102386 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 48618 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:19.103409 sshd[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:19.106620 systemd-logind[1192]: New session 14 of user core.
May 15 00:57:19.107342 systemd[1]: Started session-14.scope.
May 15 00:57:19.208650 sshd[3390]: pam_unix(sshd:session): session closed for user core
May 15 00:57:19.211509 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:48618.service: Deactivated successfully.
May 15 00:57:19.212048 systemd[1]: session-14.scope: Deactivated successfully.
May 15 00:57:19.212571 systemd-logind[1192]: Session 14 logged out. Waiting for processes to exit.
May 15 00:57:19.213663 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:48624.service.
May 15 00:57:19.214509 systemd-logind[1192]: Removed session 14.
May 15 00:57:19.251327 sshd[3403]: Accepted publickey for core from 10.0.0.1 port 48624 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:19.252448 sshd[3403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:19.255598 systemd-logind[1192]: New session 15 of user core.
May 15 00:57:19.256419 systemd[1]: Started session-15.scope.
May 15 00:57:19.457832 sshd[3403]: pam_unix(sshd:session): session closed for user core
May 15 00:57:19.462036 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:48632.service.
May 15 00:57:19.462556 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:48624.service: Deactivated successfully.
May 15 00:57:19.463270 systemd[1]: session-15.scope: Deactivated successfully.
May 15 00:57:19.463801 systemd-logind[1192]: Session 15 logged out. Waiting for processes to exit.
May 15 00:57:19.464668 systemd-logind[1192]: Removed session 15.
May 15 00:57:19.501925 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 48632 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:19.503071 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:19.506682 systemd-logind[1192]: New session 16 of user core.
May 15 00:57:19.507647 systemd[1]: Started session-16.scope.
May 15 00:57:20.231426 sshd[3413]: pam_unix(sshd:session): session closed for user core
May 15 00:57:20.233575 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:48632.service: Deactivated successfully.
May 15 00:57:20.234351 systemd[1]: session-16.scope: Deactivated successfully.
May 15 00:57:20.235585 systemd-logind[1192]: Session 16 logged out. Waiting for processes to exit.
May 15 00:57:20.236596 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:48640.service.
May 15 00:57:20.237389 systemd-logind[1192]: Removed session 16.
May 15 00:57:20.276598 sshd[3433]: Accepted publickey for core from 10.0.0.1 port 48640 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:20.277789 sshd[3433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:20.281024 systemd-logind[1192]: New session 17 of user core.
May 15 00:57:20.281815 systemd[1]: Started session-17.scope.
May 15 00:57:20.646804 sshd[3433]: pam_unix(sshd:session): session closed for user core
May 15 00:57:20.650308 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:48654.service.
May 15 00:57:20.650768 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:48640.service: Deactivated successfully.
May 15 00:57:20.651923 systemd[1]: session-17.scope: Deactivated successfully.
May 15 00:57:20.652500 systemd-logind[1192]: Session 17 logged out. Waiting for processes to exit.
May 15 00:57:20.653484 systemd-logind[1192]: Removed session 17.
May 15 00:57:20.687192 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 48654 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:20.688342 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:20.691967 systemd-logind[1192]: New session 18 of user core.
May 15 00:57:20.692715 systemd[1]: Started session-18.scope.
May 15 00:57:20.801902 sshd[3443]: pam_unix(sshd:session): session closed for user core
May 15 00:57:20.804035 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:48654.service: Deactivated successfully.
May 15 00:57:20.804700 systemd[1]: session-18.scope: Deactivated successfully.
May 15 00:57:20.805383 systemd-logind[1192]: Session 18 logged out. Waiting for processes to exit.
May 15 00:57:20.805987 systemd-logind[1192]: Removed session 18.
May 15 00:57:25.806069 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:48662.service.
May 15 00:57:25.845213 sshd[3460]: Accepted publickey for core from 10.0.0.1 port 48662 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:25.846162 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:25.849357 systemd-logind[1192]: New session 19 of user core.
May 15 00:57:25.850256 systemd[1]: Started session-19.scope.
May 15 00:57:25.950132 sshd[3460]: pam_unix(sshd:session): session closed for user core
May 15 00:57:25.952406 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:48662.service: Deactivated successfully.
May 15 00:57:25.953120 systemd[1]: session-19.scope: Deactivated successfully.
May 15 00:57:25.953731 systemd-logind[1192]: Session 19 logged out. Waiting for processes to exit.
May 15 00:57:25.954351 systemd-logind[1192]: Removed session 19.
May 15 00:57:30.955992 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:34238.service.
May 15 00:57:30.994248 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:30.995653 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:30.999967 systemd-logind[1192]: New session 20 of user core.
May 15 00:57:31.000738 systemd[1]: Started session-20.scope.
May 15 00:57:31.098106 sshd[3475]: pam_unix(sshd:session): session closed for user core
May 15 00:57:31.100302 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:34238.service: Deactivated successfully.
May 15 00:57:31.100944 systemd[1]: session-20.scope: Deactivated successfully.
May 15 00:57:31.101454 systemd-logind[1192]: Session 20 logged out. Waiting for processes to exit.
May 15 00:57:31.102033 systemd-logind[1192]: Removed session 20.
May 15 00:57:36.102600 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:34254.service.
May 15 00:57:36.139472 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:36.140423 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:36.143636 systemd-logind[1192]: New session 21 of user core.
May 15 00:57:36.144620 systemd[1]: Started session-21.scope.
May 15 00:57:36.239061 sshd[3490]: pam_unix(sshd:session): session closed for user core
May 15 00:57:36.240929 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:34254.service: Deactivated successfully.
May 15 00:57:36.241601 systemd[1]: session-21.scope: Deactivated successfully.
May 15 00:57:36.242359 systemd-logind[1192]: Session 21 logged out. Waiting for processes to exit.
May 15 00:57:36.243029 systemd-logind[1192]: Removed session 21.
May 15 00:57:41.243013 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:39670.service.
May 15 00:57:41.281084 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 39670 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:41.282197 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:41.285472 systemd-logind[1192]: New session 22 of user core.
May 15 00:57:41.286370 systemd[1]: Started session-22.scope.
May 15 00:57:41.386950 sshd[3503]: pam_unix(sshd:session): session closed for user core
May 15 00:57:41.389988 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:39670.service: Deactivated successfully.
May 15 00:57:41.390619 systemd[1]: session-22.scope: Deactivated successfully.
May 15 00:57:41.391326 systemd-logind[1192]: Session 22 logged out. Waiting for processes to exit.
May 15 00:57:41.392521 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:39678.service.
May 15 00:57:41.393355 systemd-logind[1192]: Removed session 22.
May 15 00:57:41.432548 sshd[3516]: Accepted publickey for core from 10.0.0.1 port 39678 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:41.433614 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:41.436993 systemd-logind[1192]: New session 23 of user core.
May 15 00:57:41.437735 systemd[1]: Started session-23.scope.
May 15 00:57:42.241310 kubelet[1903]: E0515 00:57:42.241254 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:42.917643 kubelet[1903]: I0515 00:57:42.917560 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jw4mn" podStartSLOduration=70.917541411 podStartE2EDuration="1m10.917541411s" podCreationTimestamp="2025-05-15 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:58.369050081 +0000 UTC m=+33.216379859" watchObservedRunningTime="2025-05-15 00:57:42.917541411 +0000 UTC m=+77.764871199"
May 15 00:57:42.926775 env[1204]: time="2025-05-15T00:57:42.926724144Z" level=info msg="StopContainer for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" with timeout 30 (s)"
May 15 00:57:42.927223 env[1204]: time="2025-05-15T00:57:42.927055382Z" level=info msg="Stop container \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" with signal terminated"
May 15 00:57:42.939056 systemd[1]: cri-containerd-33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d.scope: Deactivated successfully.
May 15 00:57:42.958197 env[1204]: time="2025-05-15T00:57:42.958113188Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:57:42.958982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d-rootfs.mount: Deactivated successfully.
May 15 00:57:42.963830 env[1204]: time="2025-05-15T00:57:42.963795598Z" level=info msg="StopContainer for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" with timeout 2 (s)"
May 15 00:57:42.964233 env[1204]: time="2025-05-15T00:57:42.964158344Z" level=info msg="Stop container \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" with signal terminated"
May 15 00:57:42.971161 systemd-networkd[1025]: lxc_health: Link DOWN
May 15 00:57:42.971169 systemd-networkd[1025]: lxc_health: Lost carrier
May 15 00:57:42.990492 env[1204]: time="2025-05-15T00:57:42.990445941Z" level=info msg="shim disconnected" id=33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d
May 15 00:57:42.990492 env[1204]: time="2025-05-15T00:57:42.990484644Z" level=warning msg="cleaning up after shim disconnected" id=33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d namespace=k8s.io
May 15 00:57:42.990492 env[1204]: time="2025-05-15T00:57:42.990493861Z" level=info msg="cleaning up dead shim"
May 15 00:57:42.996638 env[1204]: time="2025-05-15T00:57:42.996587707Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3573 runtime=io.containerd.runc.v2\n"
May 15 00:57:42.997726 systemd[1]: cri-containerd-2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430.scope: Deactivated successfully.
May 15 00:57:42.997972 systemd[1]: cri-containerd-2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430.scope: Consumed 5.909s CPU time.
May 15 00:57:43.013272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430-rootfs.mount: Deactivated successfully.
May 15 00:57:43.016571 env[1204]: time="2025-05-15T00:57:43.016533342Z" level=info msg="StopContainer for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" returns successfully"
May 15 00:57:43.017194 env[1204]: time="2025-05-15T00:57:43.017150974Z" level=info msg="StopPodSandbox for \"7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3\""
May 15 00:57:43.017273 env[1204]: time="2025-05-15T00:57:43.017242094Z" level=info msg="Container to stop \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:57:43.019336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3-shm.mount: Deactivated successfully.
May 15 00:57:43.025428 systemd[1]: cri-containerd-7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3.scope: Deactivated successfully.
May 15 00:57:43.027394 env[1204]: time="2025-05-15T00:57:43.027291979Z" level=info msg="shim disconnected" id=2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430
May 15 00:57:43.027560 env[1204]: time="2025-05-15T00:57:43.027394240Z" level=warning msg="cleaning up after shim disconnected" id=2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430 namespace=k8s.io
May 15 00:57:43.027560 env[1204]: time="2025-05-15T00:57:43.027446899Z" level=info msg="cleaning up dead shim"
May 15 00:57:43.034617 env[1204]: time="2025-05-15T00:57:43.034570045Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3604 runtime=io.containerd.runc.v2\n"
May 15 00:57:43.044283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3-rootfs.mount: Deactivated successfully.
May 15 00:57:43.066300 env[1204]: time="2025-05-15T00:57:43.066246067Z" level=info msg="StopContainer for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" returns successfully"
May 15 00:57:43.066901 env[1204]: time="2025-05-15T00:57:43.066863579Z" level=info msg="StopPodSandbox for \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\""
May 15 00:57:43.066959 env[1204]: time="2025-05-15T00:57:43.066930744Z" level=info msg="Container to stop \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:57:43.066959 env[1204]: time="2025-05-15T00:57:43.066944670Z" level=info msg="Container to stop \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:57:43.066959 env[1204]: time="2025-05-15T00:57:43.066954018Z" level=info msg="Container to stop \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:57:43.067033 env[1204]: time="2025-05-15T00:57:43.066965970Z" level=info msg="Container to stop \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:57:43.067033 env[1204]: time="2025-05-15T00:57:43.066975939Z" level=info msg="Container to stop \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:57:43.072640 systemd[1]: cri-containerd-0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c.scope: Deactivated successfully.
May 15 00:57:43.087535 env[1204]: time="2025-05-15T00:57:43.087478169Z" level=info msg="shim disconnected" id=7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3
May 15 00:57:43.087684 env[1204]: time="2025-05-15T00:57:43.087530187Z" level=warning msg="cleaning up after shim disconnected" id=7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3 namespace=k8s.io
May 15 00:57:43.087684 env[1204]: time="2025-05-15T00:57:43.087561855Z" level=info msg="cleaning up dead shim"
May 15 00:57:43.096498 env[1204]: time="2025-05-15T00:57:43.096431610Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3647 runtime=io.containerd.runc.v2\n"
May 15 00:57:43.096819 env[1204]: time="2025-05-15T00:57:43.096793634Z" level=info msg="TearDown network for sandbox \"7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3\" successfully"
May 15 00:57:43.096819 env[1204]: time="2025-05-15T00:57:43.096816898Z" level=info msg="StopPodSandbox for \"7f3e4dda7c9bfd1eea92e33c1158d836883f1354d6057bbfb1b7141ca68d2ab3\" returns successfully"
May 15 00:57:43.148234 env[1204]: time="2025-05-15T00:57:43.148166456Z" level=info msg="shim disconnected" id=0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c
May 15 00:57:43.148234 env[1204]: time="2025-05-15T00:57:43.148234432Z" level=warning msg="cleaning up after shim disconnected" id=0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c namespace=k8s.io
May 15 00:57:43.148446 env[1204]: time="2025-05-15T00:57:43.148245353Z" level=info msg="cleaning up dead shim"
May 15 00:57:43.153982 env[1204]: time="2025-05-15T00:57:43.153949753Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n"
May 15 00:57:43.154243 env[1204]: time="2025-05-15T00:57:43.154222602Z" level=info msg="TearDown network for sandbox \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" successfully"
May 15 00:57:43.154279 env[1204]: time="2025-05-15T00:57:43.154244503Z" level=info msg="StopPodSandbox for \"0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c\" returns successfully"
May 15 00:57:43.191851 kubelet[1903]: I0515 00:57:43.191290 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-run\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.191851 kubelet[1903]: I0515 00:57:43.191352 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fss7d\" (UniqueName: \"kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-kube-api-access-fss7d\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.191851 kubelet[1903]: I0515 00:57:43.191357 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:43.191851 kubelet[1903]: I0515 00:57:43.191379 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-config-path\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.191851 kubelet[1903]: I0515 00:57:43.191398 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-hostproc\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.191851 kubelet[1903]: I0515 00:57:43.191423 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-bpf-maps\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192205 kubelet[1903]: I0515 00:57:43.191444 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqkvv\" (UniqueName: \"kubernetes.io/projected/e8cc16a3-4608-44da-8397-f10c8967f356-kube-api-access-nqkvv\") pod \"e8cc16a3-4608-44da-8397-f10c8967f356\" (UID: \"e8cc16a3-4608-44da-8397-f10c8967f356\") "
May 15 00:57:43.192205 kubelet[1903]: I0515 00:57:43.191463 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cni-path\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192205 kubelet[1903]: I0515 00:57:43.191481 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-xtables-lock\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192205 kubelet[1903]: I0515 00:57:43.191498 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-kernel\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192205 kubelet[1903]: I0515 00:57:43.191521 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-lib-modules\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192205 kubelet[1903]: I0515 00:57:43.191537 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-etc-cni-netd\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192395 kubelet[1903]: I0515 00:57:43.191559 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50bd9f10-710d-4b99-b10c-69c74603eef5-clustermesh-secrets\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") "
May 15 00:57:43.192395 kubelet[1903]: I0515 00:57:43.191577 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-net\") pod
\"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " May 15 00:57:43.192395 kubelet[1903]: I0515 00:57:43.191611 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8cc16a3-4608-44da-8397-f10c8967f356-cilium-config-path\") pod \"e8cc16a3-4608-44da-8397-f10c8967f356\" (UID: \"e8cc16a3-4608-44da-8397-f10c8967f356\") " May 15 00:57:43.192395 kubelet[1903]: I0515 00:57:43.191634 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-hubble-tls\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " May 15 00:57:43.192395 kubelet[1903]: I0515 00:57:43.191653 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-cgroup\") pod \"50bd9f10-710d-4b99-b10c-69c74603eef5\" (UID: \"50bd9f10-710d-4b99-b10c-69c74603eef5\") " May 15 00:57:43.192395 kubelet[1903]: I0515 00:57:43.191704 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.192770 kubelet[1903]: I0515 00:57:43.191730 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192770 kubelet[1903]: I0515 00:57:43.191751 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-hostproc" (OuterVolumeSpecName: "hostproc") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192770 kubelet[1903]: I0515 00:57:43.191775 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192770 kubelet[1903]: I0515 00:57:43.191987 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192770 kubelet[1903]: I0515 00:57:43.192015 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cni-path" (OuterVolumeSpecName: "cni-path") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192895 kubelet[1903]: I0515 00:57:43.192031 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192895 kubelet[1903]: I0515 00:57:43.192046 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192895 kubelet[1903]: I0515 00:57:43.192062 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.192895 kubelet[1903]: I0515 00:57:43.192078 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:43.194987 kubelet[1903]: I0515 00:57:43.194945 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8cc16a3-4608-44da-8397-f10c8967f356-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e8cc16a3-4608-44da-8397-f10c8967f356" (UID: "e8cc16a3-4608-44da-8397-f10c8967f356"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 00:57:43.195557 kubelet[1903]: I0515 00:57:43.195529 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50bd9f10-710d-4b99-b10c-69c74603eef5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 00:57:43.195924 kubelet[1903]: I0515 00:57:43.195874 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-kube-api-access-fss7d" (OuterVolumeSpecName: "kube-api-access-fss7d") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "kube-api-access-fss7d". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:57:43.197060 kubelet[1903]: I0515 00:57:43.197034 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cc16a3-4608-44da-8397-f10c8967f356-kube-api-access-nqkvv" (OuterVolumeSpecName: "kube-api-access-nqkvv") pod "e8cc16a3-4608-44da-8397-f10c8967f356" (UID: "e8cc16a3-4608-44da-8397-f10c8967f356"). InnerVolumeSpecName "kube-api-access-nqkvv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:57:43.197228 kubelet[1903]: I0515 00:57:43.197199 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:57:43.197445 kubelet[1903]: I0515 00:57:43.197417 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50bd9f10-710d-4b99-b10c-69c74603eef5" (UID: "50bd9f10-710d-4b99-b10c-69c74603eef5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 00:57:43.247231 systemd[1]: Removed slice kubepods-besteffort-pode8cc16a3_4608_44da_8397_f10c8967f356.slice. May 15 00:57:43.248212 systemd[1]: Removed slice kubepods-burstable-pod50bd9f10_710d_4b99_b10c_69c74603eef5.slice. May 15 00:57:43.248302 systemd[1]: kubepods-burstable-pod50bd9f10_710d_4b99_b10c_69c74603eef5.slice: Consumed 5.990s CPU time. 
May 15 00:57:43.292545 kubelet[1903]: I0515 00:57:43.292492 1903 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292545 kubelet[1903]: I0515 00:57:43.292544 1903 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292545 kubelet[1903]: I0515 00:57:43.292554 1903 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292564 1903 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292572 1903 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50bd9f10-710d-4b99-b10c-69c74603eef5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292579 1903 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292586 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292603 1903 reconciler_common.go:299] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e8cc16a3-4608-44da-8397-f10c8967f356-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292611 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50bd9f10-710d-4b99-b10c-69c74603eef5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292619 1903 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fss7d\" (UniqueName: \"kubernetes.io/projected/50bd9f10-710d-4b99-b10c-69c74603eef5-kube-api-access-fss7d\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.292982 kubelet[1903]: I0515 00:57:43.292628 1903 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.293232 kubelet[1903]: I0515 00:57:43.292635 1903 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.293232 kubelet[1903]: I0515 00:57:43.292642 1903 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.293232 kubelet[1903]: I0515 00:57:43.292649 1903 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50bd9f10-710d-4b99-b10c-69c74603eef5-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.293232 kubelet[1903]: I0515 00:57:43.292658 1903 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqkvv\" (UniqueName: 
\"kubernetes.io/projected/e8cc16a3-4608-44da-8397-f10c8967f356-kube-api-access-nqkvv\") on node \"localhost\" DevicePath \"\"" May 15 00:57:43.410204 kubelet[1903]: I0515 00:57:43.410127 1903 scope.go:117] "RemoveContainer" containerID="2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430" May 15 00:57:43.411312 env[1204]: time="2025-05-15T00:57:43.411275821Z" level=info msg="RemoveContainer for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\"" May 15 00:57:43.417976 env[1204]: time="2025-05-15T00:57:43.417930694Z" level=info msg="RemoveContainer for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" returns successfully" May 15 00:57:43.418253 kubelet[1903]: I0515 00:57:43.418220 1903 scope.go:117] "RemoveContainer" containerID="ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f" May 15 00:57:43.419407 env[1204]: time="2025-05-15T00:57:43.419318924Z" level=info msg="RemoveContainer for \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\"" May 15 00:57:43.423248 env[1204]: time="2025-05-15T00:57:43.423168484Z" level=info msg="RemoveContainer for \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\" returns successfully" May 15 00:57:43.424282 kubelet[1903]: I0515 00:57:43.424240 1903 scope.go:117] "RemoveContainer" containerID="0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb" May 15 00:57:43.426061 env[1204]: time="2025-05-15T00:57:43.425950522Z" level=info msg="RemoveContainer for \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\"" May 15 00:57:43.429675 env[1204]: time="2025-05-15T00:57:43.429639534Z" level=info msg="RemoveContainer for \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\" returns successfully" May 15 00:57:43.429861 kubelet[1903]: I0515 00:57:43.429843 1903 scope.go:117] "RemoveContainer" containerID="2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69" May 15 00:57:43.430854 env[1204]: 
time="2025-05-15T00:57:43.430830965Z" level=info msg="RemoveContainer for \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\"" May 15 00:57:43.434430 env[1204]: time="2025-05-15T00:57:43.434368555Z" level=info msg="RemoveContainer for \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\" returns successfully" May 15 00:57:43.434586 kubelet[1903]: I0515 00:57:43.434562 1903 scope.go:117] "RemoveContainer" containerID="adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8" May 15 00:57:43.435453 env[1204]: time="2025-05-15T00:57:43.435429944Z" level=info msg="RemoveContainer for \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\"" May 15 00:57:43.438461 env[1204]: time="2025-05-15T00:57:43.438433416Z" level=info msg="RemoveContainer for \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\" returns successfully" May 15 00:57:43.438621 kubelet[1903]: I0515 00:57:43.438598 1903 scope.go:117] "RemoveContainer" containerID="2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430" May 15 00:57:43.438861 env[1204]: time="2025-05-15T00:57:43.438797275Z" level=error msg="ContainerStatus for \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\": not found" May 15 00:57:43.438992 kubelet[1903]: E0515 00:57:43.438963 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\": not found" containerID="2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430" May 15 00:57:43.439065 kubelet[1903]: I0515 00:57:43.438995 1903 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430"} err="failed to get container status \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f3e2496f75e872d82c5ac4ab150957eb87e8877a4e84c74772405ee5c4aa430\": not found" May 15 00:57:43.439099 kubelet[1903]: I0515 00:57:43.439066 1903 scope.go:117] "RemoveContainer" containerID="ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f" May 15 00:57:43.439339 env[1204]: time="2025-05-15T00:57:43.439273383Z" level=error msg="ContainerStatus for \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\": not found" May 15 00:57:43.439500 kubelet[1903]: E0515 00:57:43.439474 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\": not found" containerID="ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f" May 15 00:57:43.439562 kubelet[1903]: I0515 00:57:43.439506 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f"} err="failed to get container status \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef1ce1c911991ac23c275876d070195e9b103759a2c36b796f6d5d9ec9501f3f\": not found" May 15 00:57:43.439562 kubelet[1903]: I0515 00:57:43.439530 1903 scope.go:117] "RemoveContainer" containerID="0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb" May 15 00:57:43.439738 env[1204]: 
time="2025-05-15T00:57:43.439703605Z" level=error msg="ContainerStatus for \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\": not found" May 15 00:57:43.439859 kubelet[1903]: E0515 00:57:43.439840 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\": not found" containerID="0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb" May 15 00:57:43.439909 kubelet[1903]: I0515 00:57:43.439857 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb"} err="failed to get container status \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\": rpc error: code = NotFound desc = an error occurred when try to find container \"0181ca6dd42c7f9f24edf5ecaf008d6191fb7ec23e52d344b68e7952bd11cebb\": not found" May 15 00:57:43.439909 kubelet[1903]: I0515 00:57:43.439879 1903 scope.go:117] "RemoveContainer" containerID="2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69" May 15 00:57:43.440071 env[1204]: time="2025-05-15T00:57:43.440034583Z" level=error msg="ContainerStatus for \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\": not found" May 15 00:57:43.440204 kubelet[1903]: E0515 00:57:43.440162 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\": not found" 
containerID="2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69" May 15 00:57:43.440434 kubelet[1903]: I0515 00:57:43.440395 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69"} err="failed to get container status \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\": rpc error: code = NotFound desc = an error occurred when try to find container \"2757892416af1c3145e7df42d8bfef468f24911b38a58be969808b836672cc69\": not found" May 15 00:57:43.440434 kubelet[1903]: I0515 00:57:43.440436 1903 scope.go:117] "RemoveContainer" containerID="adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8" May 15 00:57:43.440662 env[1204]: time="2025-05-15T00:57:43.440619153Z" level=error msg="ContainerStatus for \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\": not found" May 15 00:57:43.440769 kubelet[1903]: E0515 00:57:43.440743 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\": not found" containerID="adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8" May 15 00:57:43.440816 kubelet[1903]: I0515 00:57:43.440773 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8"} err="failed to get container status \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\": rpc error: code = NotFound desc = an error occurred when try to find container \"adaa3f00926880513c699c9b631587a072c81ab75ffcdc7b7109dd3cdbad3ca8\": not found" May 15 00:57:43.440816 
kubelet[1903]: I0515 00:57:43.440787 1903 scope.go:117] "RemoveContainer" containerID="33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d" May 15 00:57:43.441725 env[1204]: time="2025-05-15T00:57:43.441697564Z" level=info msg="RemoveContainer for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\"" May 15 00:57:43.444285 env[1204]: time="2025-05-15T00:57:43.444211804Z" level=info msg="RemoveContainer for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" returns successfully" May 15 00:57:43.444365 kubelet[1903]: I0515 00:57:43.444344 1903 scope.go:117] "RemoveContainer" containerID="33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d" May 15 00:57:43.444521 env[1204]: time="2025-05-15T00:57:43.444478020Z" level=error msg="ContainerStatus for \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\": not found" May 15 00:57:43.444633 kubelet[1903]: E0515 00:57:43.444613 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\": not found" containerID="33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d" May 15 00:57:43.444683 kubelet[1903]: I0515 00:57:43.444639 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d"} err="failed to get container status \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\": rpc error: code = NotFound desc = an error occurred when try to find container \"33f37ed734ac9bc8c62706e03b1d1356cdd05880dcfa248105a74581559b011d\": not found" May 15 00:57:43.944481 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c-rootfs.mount: Deactivated successfully. May 15 00:57:43.944568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ef6484e9f600a37f74b2690edf13812077b4b3adfed4b35c74db294780c499c-shm.mount: Deactivated successfully. May 15 00:57:43.944631 systemd[1]: var-lib-kubelet-pods-50bd9f10\x2d710d\x2d4b99\x2db10c\x2d69c74603eef5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:57:43.944688 systemd[1]: var-lib-kubelet-pods-50bd9f10\x2d710d\x2d4b99\x2db10c\x2d69c74603eef5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:57:43.944739 systemd[1]: var-lib-kubelet-pods-e8cc16a3\x2d4608\x2d44da\x2d8397\x2df10c8967f356-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqkvv.mount: Deactivated successfully. May 15 00:57:43.944786 systemd[1]: var-lib-kubelet-pods-50bd9f10\x2d710d\x2d4b99\x2db10c\x2d69c74603eef5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfss7d.mount: Deactivated successfully. May 15 00:57:44.892480 sshd[3516]: pam_unix(sshd:session): session closed for user core May 15 00:57:44.895314 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:39678.service: Deactivated successfully. May 15 00:57:44.895919 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:57:44.896445 systemd-logind[1192]: Session 23 logged out. Waiting for processes to exit. May 15 00:57:44.897453 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:39680.service. May 15 00:57:44.898194 systemd-logind[1192]: Removed session 23. May 15 00:57:44.938076 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 39680 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:57:44.939376 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:44.942638 systemd-logind[1192]: New session 24 of user core. 
May 15 00:57:44.943581 systemd[1]: Started session-24.scope. May 15 00:57:45.242765 kubelet[1903]: I0515 00:57:45.242665 1903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50bd9f10-710d-4b99-b10c-69c74603eef5" path="/var/lib/kubelet/pods/50bd9f10-710d-4b99-b10c-69c74603eef5/volumes" May 15 00:57:45.243292 kubelet[1903]: I0515 00:57:45.243256 1903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8cc16a3-4608-44da-8397-f10c8967f356" path="/var/lib/kubelet/pods/e8cc16a3-4608-44da-8397-f10c8967f356/volumes" May 15 00:57:45.243900 kubelet[1903]: E0515 00:57:45.243885 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:45.244034 kubelet[1903]: E0515 00:57:45.244010 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:45.275128 kubelet[1903]: E0515 00:57:45.275096 1903 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:57:45.514091 sshd[3678]: pam_unix(sshd:session): session closed for user core May 15 00:57:45.522195 kubelet[1903]: I0515 00:57:45.522156 1903 memory_manager.go:355] "RemoveStaleState removing state" podUID="e8cc16a3-4608-44da-8397-f10c8967f356" containerName="cilium-operator" May 15 00:57:45.522338 kubelet[1903]: I0515 00:57:45.522323 1903 memory_manager.go:355] "RemoveStaleState removing state" podUID="50bd9f10-710d-4b99-b10c-69c74603eef5" containerName="cilium-agent" May 15 00:57:45.524499 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:39682.service. May 15 00:57:45.525242 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:39680.service: Deactivated successfully. 
May 15 00:57:45.526196 systemd[1]: session-24.scope: Deactivated successfully.
May 15 00:57:45.527072 systemd-logind[1192]: Session 24 logged out. Waiting for processes to exit.
May 15 00:57:45.530325 systemd-logind[1192]: Removed session 24.
May 15 00:57:45.542112 systemd[1]: Created slice kubepods-burstable-pod9e3a21b6_aa2c_4875_b92b_5eea6d6858c2.slice.
May 15 00:57:45.574123 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 39682 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:45.575202 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:45.579083 systemd-logind[1192]: New session 25 of user core.
May 15 00:57:45.579355 systemd[1]: Started session-25.scope.
May 15 00:57:45.608394 kubelet[1903]: I0515 00:57:45.608346 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-run\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608394 kubelet[1903]: I0515 00:57:45.608378 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-cgroup\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608394 kubelet[1903]: I0515 00:57:45.608395 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-xtables-lock\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608591 kubelet[1903]: I0515 00:57:45.608409 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjzv\" (UniqueName: \"kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-kube-api-access-rgjzv\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608591 kubelet[1903]: I0515 00:57:45.608427 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-net\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608591 kubelet[1903]: I0515 00:57:45.608441 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-bpf-maps\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608591 kubelet[1903]: I0515 00:57:45.608453 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-lib-modules\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608591 kubelet[1903]: I0515 00:57:45.608466 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hostproc\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608591 kubelet[1903]: I0515 00:57:45.608555 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-config-path\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608752 kubelet[1903]: I0515 00:57:45.608591 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hubble-tls\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608752 kubelet[1903]: I0515 00:57:45.608613 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-ipsec-secrets\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608752 kubelet[1903]: I0515 00:57:45.608633 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-kernel\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608752 kubelet[1903]: I0515 00:57:45.608654 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-etc-cni-netd\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608752 kubelet[1903]: I0515 00:57:45.608671 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-clustermesh-secrets\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.608752 kubelet[1903]: I0515 00:57:45.608685 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cni-path\") pod \"cilium-6d7g7\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") " pod="kube-system/cilium-6d7g7"
May 15 00:57:45.695986 sshd[3690]: pam_unix(sshd:session): session closed for user core
May 15 00:57:45.698932 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:39682.service: Deactivated successfully.
May 15 00:57:45.699488 systemd[1]: session-25.scope: Deactivated successfully.
May 15 00:57:45.700336 systemd-logind[1192]: Session 25 logged out. Waiting for processes to exit.
May 15 00:57:45.701659 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:39686.service.
May 15 00:57:45.702644 systemd-logind[1192]: Removed session 25.
May 15 00:57:45.707640 kubelet[1903]: E0515 00:57:45.707571 1903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rgjzv lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-6d7g7" podUID="9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"
May 15 00:57:45.741181 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 39686 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:45.742227 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:45.745325 systemd-logind[1192]: New session 26 of user core.
May 15 00:57:45.746205 systemd[1]: Started session-26.scope.
May 15 00:57:46.513256 kubelet[1903]: I0515 00:57:46.513207 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hubble-tls\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513256 kubelet[1903]: I0515 00:57:46.513244 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-net\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513256 kubelet[1903]: I0515 00:57:46.513258 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-lib-modules\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513658 kubelet[1903]: I0515 00:57:46.513269 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cni-path\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513658 kubelet[1903]: I0515 00:57:46.513283 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-cgroup\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513658 kubelet[1903]: I0515 00:57:46.513297 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgjzv\" (UniqueName: \"kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-kube-api-access-rgjzv\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513658 kubelet[1903]: I0515 00:57:46.513310 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-etc-cni-netd\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513658 kubelet[1903]: I0515 00:57:46.513325 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-config-path\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513658 kubelet[1903]: I0515 00:57:46.513325 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.513789 kubelet[1903]: I0515 00:57:46.513339 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-ipsec-secrets\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513789 kubelet[1903]: I0515 00:57:46.513352 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hostproc\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513789 kubelet[1903]: I0515 00:57:46.513362 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.513789 kubelet[1903]: I0515 00:57:46.513364 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-bpf-maps\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513789 kubelet[1903]: I0515 00:57:46.513381 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.513789 kubelet[1903]: I0515 00:57:46.513393 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-run\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513419 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-clustermesh-secrets\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513439 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-xtables-lock\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513457 1903 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-kernel\") pod \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\" (UID: \"9e3a21b6-aa2c-4875-b92b-5eea6d6858c2\") "
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513491 1903 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513503 1903 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513526 1903 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.513927 kubelet[1903]: I0515 00:57:46.513396 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.514201 kubelet[1903]: I0515 00:57:46.513548 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.514201 kubelet[1903]: I0515 00:57:46.513562 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.514201 kubelet[1903]: I0515 00:57:46.513627 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.514201 kubelet[1903]: I0515 00:57:46.513658 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.514443 kubelet[1903]: I0515 00:57:46.514421 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.514656 kubelet[1903]: I0515 00:57:46.514632 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:46.515728 kubelet[1903]: I0515 00:57:46.515701 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-kube-api-access-rgjzv" (OuterVolumeSpecName: "kube-api-access-rgjzv") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "kube-api-access-rgjzv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:57:46.515872 kubelet[1903]: I0515 00:57:46.515849 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 00:57:46.516865 kubelet[1903]: I0515 00:57:46.516846 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:57:46.517043 kubelet[1903]: I0515 00:57:46.517028 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 00:57:46.517295 kubelet[1903]: I0515 00:57:46.517278 1903 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" (UID: "9e3a21b6-aa2c-4875-b92b-5eea6d6858c2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 00:57:46.517607 systemd[1]: var-lib-kubelet-pods-9e3a21b6\x2daa2c\x2d4875\x2db92b\x2d5eea6d6858c2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 00:57:46.517713 systemd[1]: var-lib-kubelet-pods-9e3a21b6\x2daa2c\x2d4875\x2db92b\x2d5eea6d6858c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drgjzv.mount: Deactivated successfully.
May 15 00:57:46.614453 kubelet[1903]: I0515 00:57:46.614407 1903 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614453 kubelet[1903]: I0515 00:57:46.614442 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614453 kubelet[1903]: I0515 00:57:46.614454 1903 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614463 1903 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614475 1903 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614484 1903 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614493 1903 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614511 1903 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rgjzv\" (UniqueName: \"kubernetes.io/projected/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-kube-api-access-rgjzv\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614522 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614530 1903 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614673 kubelet[1903]: I0515 00:57:46.614541 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.614857 kubelet[1903]: I0515 00:57:46.614549 1903 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 15 00:57:46.715426 systemd[1]: var-lib-kubelet-pods-9e3a21b6\x2daa2c\x2d4875\x2db92b\x2d5eea6d6858c2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 00:57:46.715540 systemd[1]: var-lib-kubelet-pods-9e3a21b6\x2daa2c\x2d4875\x2db92b\x2d5eea6d6858c2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 15 00:57:47.081261 kubelet[1903]: I0515 00:57:47.081221 1903 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:57:47Z","lastTransitionTime":"2025-05-15T00:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 00:57:47.241557 kubelet[1903]: E0515 00:57:47.241525 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:47.246749 systemd[1]: Removed slice kubepods-burstable-pod9e3a21b6_aa2c_4875_b92b_5eea6d6858c2.slice.
May 15 00:57:47.451884 systemd[1]: Created slice kubepods-burstable-pod58a411d4_c49b_4906_b204_e4e137763aa2.slice.
May 15 00:57:47.519863 kubelet[1903]: I0515 00:57:47.519831 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-host-proc-sys-net\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.519863 kubelet[1903]: I0515 00:57:47.519865 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-host-proc-sys-kernel\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.519863 kubelet[1903]: I0515 00:57:47.519885 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-cilium-cgroup\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520254 kubelet[1903]: I0515 00:57:47.519898 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-etc-cni-netd\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520254 kubelet[1903]: I0515 00:57:47.519912 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-cni-path\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520254 kubelet[1903]: I0515 00:57:47.519926 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58a411d4-c49b-4906-b204-e4e137763aa2-hubble-tls\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520254 kubelet[1903]: I0515 00:57:47.519961 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd47f\" (UniqueName: \"kubernetes.io/projected/58a411d4-c49b-4906-b204-e4e137763aa2-kube-api-access-jd47f\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520254 kubelet[1903]: I0515 00:57:47.519998 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a411d4-c49b-4906-b204-e4e137763aa2-cilium-config-path\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520254 kubelet[1903]: I0515 00:57:47.520025 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58a411d4-c49b-4906-b204-e4e137763aa2-clustermesh-secrets\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520394 kubelet[1903]: I0515 00:57:47.520046 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-lib-modules\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520394 kubelet[1903]: I0515 00:57:47.520074 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-bpf-maps\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520394 kubelet[1903]: I0515 00:57:47.520104 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-hostproc\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520394 kubelet[1903]: I0515 00:57:47.520117 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-xtables-lock\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520394 kubelet[1903]: I0515 00:57:47.520131 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58a411d4-c49b-4906-b204-e4e137763aa2-cilium-run\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.520394 kubelet[1903]: I0515 00:57:47.520143 1903 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/58a411d4-c49b-4906-b204-e4e137763aa2-cilium-ipsec-secrets\") pod \"cilium-44vh7\" (UID: \"58a411d4-c49b-4906-b204-e4e137763aa2\") " pod="kube-system/cilium-44vh7"
May 15 00:57:47.754255 kubelet[1903]: E0515 00:57:47.754137 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:47.754796 env[1204]: time="2025-05-15T00:57:47.754747014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-44vh7,Uid:58a411d4-c49b-4906-b204-e4e137763aa2,Namespace:kube-system,Attempt:0,}"
May 15 00:57:47.903820 env[1204]: time="2025-05-15T00:57:47.903738955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:57:47.903820 env[1204]: time="2025-05-15T00:57:47.903781183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:57:47.903820 env[1204]: time="2025-05-15T00:57:47.903793697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:57:47.904028 env[1204]: time="2025-05-15T00:57:47.903945792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b pid=3734 runtime=io.containerd.runc.v2
May 15 00:57:47.922194 systemd[1]: run-containerd-runc-k8s.io-b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b-runc.h5IlhQ.mount: Deactivated successfully.
May 15 00:57:47.924140 systemd[1]: Started cri-containerd-b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b.scope.
May 15 00:57:47.946890 env[1204]: time="2025-05-15T00:57:47.946840316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-44vh7,Uid:58a411d4-c49b-4906-b204-e4e137763aa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\""
May 15 00:57:47.947798 kubelet[1903]: E0515 00:57:47.947585 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:47.949365 env[1204]: time="2025-05-15T00:57:47.949335956Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:57:47.962835 env[1204]: time="2025-05-15T00:57:47.962776289Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b\""
May 15 00:57:47.963447 env[1204]: time="2025-05-15T00:57:47.963410306Z" level=info msg="StartContainer for \"409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b\""
May 15 00:57:47.978265 systemd[1]: Started cri-containerd-409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b.scope.
May 15 00:57:48.001893 env[1204]: time="2025-05-15T00:57:48.001854423Z" level=info msg="StartContainer for \"409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b\" returns successfully"
May 15 00:57:48.007596 systemd[1]: cri-containerd-409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b.scope: Deactivated successfully.
May 15 00:57:48.036991 env[1204]: time="2025-05-15T00:57:48.036933040Z" level=info msg="shim disconnected" id=409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b
May 15 00:57:48.036991 env[1204]: time="2025-05-15T00:57:48.036989546Z" level=warning msg="cleaning up after shim disconnected" id=409c3cd7135309a835e7921392c79fee4021307edeb0e2e5fc33bbd7f613be8b namespace=k8s.io
May 15 00:57:48.037231 env[1204]: time="2025-05-15T00:57:48.037004264Z" level=info msg="cleaning up dead shim"
May 15 00:57:48.043261 env[1204]: time="2025-05-15T00:57:48.043205228Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3816 runtime=io.containerd.runc.v2\n"
May 15 00:57:48.422836 kubelet[1903]: E0515 00:57:48.422804 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:48.424690 env[1204]: time="2025-05-15T00:57:48.424648826Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:57:48.440977 env[1204]: time="2025-05-15T00:57:48.440916233Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8\""
May 15 00:57:48.441437 env[1204]: time="2025-05-15T00:57:48.441411300Z" level=info msg="StartContainer for \"0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8\""
May 15 00:57:48.454066 systemd[1]: Started cri-containerd-0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8.scope.
May 15 00:57:48.476256 env[1204]: time="2025-05-15T00:57:48.476201999Z" level=info msg="StartContainer for \"0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8\" returns successfully"
May 15 00:57:48.481109 systemd[1]: cri-containerd-0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8.scope: Deactivated successfully.
May 15 00:57:48.498728 env[1204]: time="2025-05-15T00:57:48.498686029Z" level=info msg="shim disconnected" id=0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8
May 15 00:57:48.498728 env[1204]: time="2025-05-15T00:57:48.498725742Z" level=warning msg="cleaning up after shim disconnected" id=0f4e6e1eeb1d0f5aa0760d3e20acb260b3b1e716d6a52a5e38f75b1d972ad4f8 namespace=k8s.io
May 15 00:57:48.498728 env[1204]: time="2025-05-15T00:57:48.498733607Z" level=info msg="cleaning up dead shim"
May 15 00:57:48.505037 env[1204]: time="2025-05-15T00:57:48.504979415Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3878 runtime=io.containerd.runc.v2\n"
May 15 00:57:49.243470 kubelet[1903]: I0515 00:57:49.243418 1903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3a21b6-aa2c-4875-b92b-5eea6d6858c2" path="/var/lib/kubelet/pods/9e3a21b6-aa2c-4875-b92b-5eea6d6858c2/volumes"
May 15 00:57:49.426950 kubelet[1903]: E0515 00:57:49.426913 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:49.428750 env[1204]: time="2025-05-15T00:57:49.428700779Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:57:49.724072 env[1204]: time="2025-05-15T00:57:49.723983528Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d\""
May 15 00:57:49.724581 env[1204]: time="2025-05-15T00:57:49.724539550Z" level=info msg="StartContainer for \"ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d\""
May 15 00:57:49.742854 systemd[1]: Started cri-containerd-ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d.scope.
May 15 00:57:49.766472 systemd[1]: cri-containerd-ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d.scope: Deactivated successfully.
May 15 00:57:49.805244 env[1204]: time="2025-05-15T00:57:49.805145759Z" level=info msg="StartContainer for \"ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d\" returns successfully"
May 15 00:57:49.899842 systemd[1]: run-containerd-runc-k8s.io-ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d-runc.ioCtn9.mount: Deactivated successfully.
May 15 00:57:49.899934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d-rootfs.mount: Deactivated successfully.
May 15 00:57:49.950942 env[1204]: time="2025-05-15T00:57:49.950886267Z" level=info msg="shim disconnected" id=ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d
May 15 00:57:49.950942 env[1204]: time="2025-05-15T00:57:49.950932654Z" level=warning msg="cleaning up after shim disconnected" id=ef0b79b61019e42d2c6202228974f20d102c5915a5ae1c614e5c1204726cf44d namespace=k8s.io
May 15 00:57:49.950942 env[1204]: time="2025-05-15T00:57:49.950941240Z" level=info msg="cleaning up dead shim"
May 15 00:57:49.957107 env[1204]: time="2025-05-15T00:57:49.957062323Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3933 runtime=io.containerd.runc.v2\n"
May 15 00:57:50.276218 kubelet[1903]: E0515 00:57:50.276144 1903 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:57:50.429873 kubelet[1903]: E0515 00:57:50.429845 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:50.431690 env[1204]: time="2025-05-15T00:57:50.431647282Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:57:50.451588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294514078.mount: Deactivated successfully.
May 15 00:57:50.453706 env[1204]: time="2025-05-15T00:57:50.453659188Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952\""
May 15 00:57:50.454156 env[1204]: time="2025-05-15T00:57:50.454128669Z" level=info msg="StartContainer for \"001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952\""
May 15 00:57:50.471363 systemd[1]: Started cri-containerd-001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952.scope.
May 15 00:57:50.492679 systemd[1]: cri-containerd-001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952.scope: Deactivated successfully.
May 15 00:57:50.494480 env[1204]: time="2025-05-15T00:57:50.494410412Z" level=info msg="StartContainer for \"001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952\" returns successfully"
May 15 00:57:50.513417 env[1204]: time="2025-05-15T00:57:50.513364052Z" level=info msg="shim disconnected" id=001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952
May 15 00:57:50.513417 env[1204]: time="2025-05-15T00:57:50.513426198Z" level=warning msg="cleaning up after shim disconnected" id=001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952 namespace=k8s.io
May 15 00:57:50.513578 env[1204]: time="2025-05-15T00:57:50.513436307Z" level=info msg="cleaning up dead shim"
May 15 00:57:50.519063 env[1204]: time="2025-05-15T00:57:50.519030379Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3987 runtime=io.containerd.runc.v2\n"
May 15 00:57:50.899909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-001493cecd7d2982cb93fcfe50337eb84fa825f79eeb0ed3867c26e873caf952-rootfs.mount: Deactivated successfully.
May 15 00:57:51.432879 kubelet[1903]: E0515 00:57:51.432852 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:51.434992 env[1204]: time="2025-05-15T00:57:51.434384764Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:57:51.451789 env[1204]: time="2025-05-15T00:57:51.451735400Z" level=info msg="CreateContainer within sandbox \"b034d30d36aa05693dc15bf6ee05257f47057a7083707aefc48aaeaf09f1ac5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a7fa3291de18cb917ecf9a32a137825ae92f2c550988285617bd404889858bc3\""
May 15 00:57:51.452233 env[1204]: time="2025-05-15T00:57:51.452207435Z" level=info msg="StartContainer for \"a7fa3291de18cb917ecf9a32a137825ae92f2c550988285617bd404889858bc3\""
May 15 00:57:51.467301 systemd[1]: Started cri-containerd-a7fa3291de18cb917ecf9a32a137825ae92f2c550988285617bd404889858bc3.scope.
May 15 00:57:51.492923 env[1204]: time="2025-05-15T00:57:51.492870708Z" level=info msg="StartContainer for \"a7fa3291de18cb917ecf9a32a137825ae92f2c550988285617bd404889858bc3\" returns successfully"
May 15 00:57:51.727210 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 00:57:52.436447 kubelet[1903]: E0515 00:57:52.436416 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:52.449327 kubelet[1903]: I0515 00:57:52.449111 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-44vh7" podStartSLOduration=5.449095885 podStartE2EDuration="5.449095885s" podCreationTimestamp="2025-05-15 00:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:52.448664134 +0000 UTC m=+87.295993912" watchObservedRunningTime="2025-05-15 00:57:52.449095885 +0000 UTC m=+87.296425663"
May 15 00:57:53.755718 kubelet[1903]: E0515 00:57:53.755675 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:54.140314 systemd[1]: run-containerd-runc-k8s.io-a7fa3291de18cb917ecf9a32a137825ae92f2c550988285617bd404889858bc3-runc.kIAIAS.mount: Deactivated successfully.
May 15 00:57:54.276983 systemd-networkd[1025]: lxc_health: Link UP
May 15 00:57:54.286571 systemd-networkd[1025]: lxc_health: Gained carrier
May 15 00:57:54.287603 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 00:57:55.640371 systemd-networkd[1025]: lxc_health: Gained IPv6LL
May 15 00:57:55.755320 kubelet[1903]: E0515 00:57:55.755260 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:56.443674 kubelet[1903]: E0515 00:57:56.443623 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:57.445640 kubelet[1903]: E0515 00:57:57.445595 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:58.310113 systemd[1]: run-containerd-runc-k8s.io-a7fa3291de18cb917ecf9a32a137825ae92f2c550988285617bd404889858bc3-runc.M7WQmj.mount: Deactivated successfully.
May 15 00:58:00.437151 sshd[3705]: pam_unix(sshd:session): session closed for user core
May 15 00:58:00.439647 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:39686.service: Deactivated successfully.
May 15 00:58:00.440387 systemd[1]: session-26.scope: Deactivated successfully.
May 15 00:58:00.441088 systemd-logind[1192]: Session 26 logged out. Waiting for processes to exit.
May 15 00:58:00.441827 systemd-logind[1192]: Removed session 26.