May 8 00:45:33.081831 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 7 23:10:51 -00 2025
May 8 00:45:33.081871 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488
May 8 00:45:33.081881 kernel: BIOS-provided physical RAM map:
May 8 00:45:33.081889 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 8 00:45:33.081895 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 8 00:45:33.081902 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 00:45:33.081910 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 8 00:45:33.081917 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 8 00:45:33.081926 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 8 00:45:33.081933 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 8 00:45:33.081940 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 00:45:33.081947 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 00:45:33.081954 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 00:45:33.081961 kernel: NX (Execute Disable) protection: active
May 8 00:45:33.081972 kernel: SMBIOS 2.8 present.
May 8 00:45:33.081979 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 8 00:45:33.081986 kernel: Hypervisor detected: KVM
May 8 00:45:33.081994 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 00:45:33.082001 kernel: kvm-clock: cpu 0, msr 7a198001, primary cpu clock
May 8 00:45:33.082008 kernel: kvm-clock: using sched offset of 3665294315 cycles
May 8 00:45:33.082016 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 00:45:33.082024 kernel: tsc: Detected 2794.748 MHz processor
May 8 00:45:33.082032 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 00:45:33.082041 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 00:45:33.082049 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 8 00:45:33.082057 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 00:45:33.082064 kernel: Using GB pages for direct mapping
May 8 00:45:33.082071 kernel: ACPI: Early table checksum verification disabled
May 8 00:45:33.082079 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 8 00:45:33.082086 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082097 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082105 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082114 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 8 00:45:33.082121 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082129 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082136 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082144 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:45:33.082151 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 8 00:45:33.082159 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 8 00:45:33.082167 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 8 00:45:33.082179 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 8 00:45:33.082187 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 8 00:45:33.082195 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 8 00:45:33.082203 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 8 00:45:33.082211 kernel: No NUMA configuration found
May 8 00:45:33.082219 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 8 00:45:33.082229 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 8 00:45:33.082245 kernel: Zone ranges:
May 8 00:45:33.082264 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 00:45:33.082277 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 8 00:45:33.082286 kernel: Normal empty
May 8 00:45:33.082294 kernel: Movable zone start for each node
May 8 00:45:33.082309 kernel: Early memory node ranges
May 8 00:45:33.082318 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 00:45:33.082326 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 8 00:45:33.082336 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 8 00:45:33.082347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 00:45:33.082355 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 00:45:33.082363 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 8 00:45:33.082371 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 00:45:33.082379 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 00:45:33.082401 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 00:45:33.082409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 00:45:33.082417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 00:45:33.082425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 00:45:33.082435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 00:45:33.082443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 00:45:33.082460 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 00:45:33.082469 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 8 00:45:33.082476 kernel: TSC deadline timer available
May 8 00:45:33.082484 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 8 00:45:33.082492 kernel: kvm-guest: KVM setup pv remote TLB flush
May 8 00:45:33.082500 kernel: kvm-guest: setup PV sched yield
May 8 00:45:33.082508 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 8 00:45:33.082518 kernel: Booting paravirtualized kernel on KVM
May 8 00:45:33.082526 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 00:45:33.082537 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 8 00:45:33.082546 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 8 00:45:33.082554 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 8 00:45:33.082562 kernel: pcpu-alloc: [0] 0 1 2 3
May 8 00:45:33.082569 kernel: kvm-guest: setup async PF for cpu 0
May 8 00:45:33.082578 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 8 00:45:33.082585 kernel: kvm-guest: PV spinlocks enabled
May 8 00:45:33.082595 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 8 00:45:33.082604 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 8 00:45:33.082611 kernel: Policy zone: DMA32
May 8 00:45:33.082621 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488
May 8 00:45:33.082629 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:45:33.082640 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:45:33.082648 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:45:33.082656 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:45:33.082671 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2279K rwdata, 13724K rodata, 47464K init, 4116K bss, 134796K reserved, 0K cma-reserved)
May 8 00:45:33.082679 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:45:33.082687 kernel: ftrace: allocating 34584 entries in 136 pages
May 8 00:45:33.082697 kernel: ftrace: allocated 136 pages with 2 groups
May 8 00:45:33.082705 kernel: rcu: Hierarchical RCU implementation.
May 8 00:45:33.082714 kernel: rcu: RCU event tracing is enabled.
May 8 00:45:33.082722 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:45:33.082730 kernel: Rude variant of Tasks RCU enabled.
May 8 00:45:33.082738 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:45:33.082748 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:45:33.082756 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:45:33.082764 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 8 00:45:33.082777 kernel: random: crng init done
May 8 00:45:33.082786 kernel: Console: colour VGA+ 80x25
May 8 00:45:33.082794 kernel: printk: console [ttyS0] enabled
May 8 00:45:33.082802 kernel: ACPI: Core revision 20210730
May 8 00:45:33.082810 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 8 00:45:33.082818 kernel: APIC: Switch to symmetric I/O mode setup
May 8 00:45:33.082828 kernel: x2apic enabled
May 8 00:45:33.082836 kernel: Switched APIC routing to physical x2apic.
May 8 00:45:33.082844 kernel: kvm-guest: setup PV IPIs
May 8 00:45:33.082852 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 00:45:33.082860 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 00:45:33.082874 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 8 00:45:33.082882 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 8 00:45:33.082890 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 8 00:45:33.082899 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 8 00:45:33.082914 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 00:45:33.082922 kernel: Spectre V2 : Mitigation: Retpolines
May 8 00:45:33.082931 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 00:45:33.082941 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 00:45:33.082950 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 8 00:45:33.082961 kernel: RETBleed: Mitigation: untrained return thunk
May 8 00:45:33.082969 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 8 00:45:33.082978 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 8 00:45:33.082987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 8 00:45:33.082997 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 8 00:45:33.083006 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 8 00:45:33.083014 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 8 00:45:33.083023 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 8 00:45:33.083031 kernel: Freeing SMP alternatives memory: 32K
May 8 00:45:33.083040 kernel: pid_max: default: 32768 minimum: 301
May 8 00:45:33.083050 kernel: LSM: Security Framework initializing
May 8 00:45:33.083062 kernel: SELinux: Initializing.
May 8 00:45:33.083071 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:45:33.083079 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:45:33.083088 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 8 00:45:33.083097 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 8 00:45:33.083105 kernel: ... version: 0
May 8 00:45:33.083113 kernel: ... bit width: 48
May 8 00:45:33.083122 kernel: ... generic registers: 6
May 8 00:45:33.083130 kernel: ... value mask: 0000ffffffffffff
May 8 00:45:33.083144 kernel: ... max period: 00007fffffffffff
May 8 00:45:33.083158 kernel: ... fixed-purpose events: 0
May 8 00:45:33.083167 kernel: ... event mask: 000000000000003f
May 8 00:45:33.083180 kernel: signal: max sigframe size: 1776
May 8 00:45:33.083189 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:45:33.083202 kernel: smp: Bringing up secondary CPUs ...
May 8 00:45:33.083213 kernel: x86: Booting SMP configuration:
May 8 00:45:33.083222 kernel: .... node #0, CPUs: #1
May 8 00:45:33.083230 kernel: kvm-clock: cpu 1, msr 7a198041, secondary cpu clock
May 8 00:45:33.083242 kernel: kvm-guest: setup async PF for cpu 1
May 8 00:45:33.083253 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 8 00:45:33.083261 kernel: #2
May 8 00:45:33.083270 kernel: kvm-clock: cpu 2, msr 7a198081, secondary cpu clock
May 8 00:45:33.083278 kernel: kvm-guest: setup async PF for cpu 2
May 8 00:45:33.083287 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 8 00:45:33.083295 kernel: #3
May 8 00:45:33.083303 kernel: kvm-clock: cpu 3, msr 7a1980c1, secondary cpu clock
May 8 00:45:33.083311 kernel: kvm-guest: setup async PF for cpu 3
May 8 00:45:33.083320 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 8 00:45:33.083330 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:45:33.083338 kernel: smpboot: Max logical packages: 1
May 8 00:45:33.083346 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 8 00:45:33.083355 kernel: devtmpfs: initialized
May 8 00:45:33.083363 kernel: x86/mm: Memory block size: 128MB
May 8 00:45:33.083372 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:45:33.083380 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:45:33.083399 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:45:33.083412 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:45:33.083427 kernel: audit: initializing netlink subsys (disabled)
May 8 00:45:33.083442 kernel: audit: type=2000 audit(1746665132.670:1): state=initialized audit_enabled=0 res=1
May 8 00:45:33.083458 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:45:33.083469 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 00:45:33.083477 kernel: cpuidle: using governor menu
May 8 00:45:33.083486 kernel: ACPI: bus type PCI registered
May 8 00:45:33.083494 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:45:33.083503 kernel: dca service started, version 1.12.1
May 8 00:45:33.083512 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 8 00:45:33.083523 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 8 00:45:33.083531 kernel: PCI: Using configuration type 1 for base access
May 8 00:45:33.083540 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 00:45:33.083548 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:45:33.083557 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:45:33.083565 kernel: ACPI: Added _OSI(Module Device)
May 8 00:45:33.083574 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:45:33.083582 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:45:33.083590 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:45:33.083600 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 8 00:45:33.083608 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 8 00:45:33.083617 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 8 00:45:33.083630 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:45:33.083639 kernel: ACPI: Interpreter enabled
May 8 00:45:33.083649 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 00:45:33.083658 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 00:45:33.083666 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 00:45:33.083675 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 8 00:45:33.083686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:45:33.083898 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:45:33.084005 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 8 00:45:33.084087 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 8 00:45:33.084097 kernel: PCI host bridge to bus 0000:00
May 8 00:45:33.084206 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 00:45:33.084549 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 00:45:33.084639 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 00:45:33.084717 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 8 00:45:33.084790 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 00:45:33.084864 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 8 00:45:33.084937 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:45:33.085058 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 8 00:45:33.085162 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 8 00:45:33.085250 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 8 00:45:33.085335 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 8 00:45:33.085471 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 8 00:45:33.085566 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 00:45:33.085665 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:45:33.085760 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 8 00:45:33.085862 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 8 00:45:33.085959 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 8 00:45:33.086059 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 8 00:45:33.086142 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 8 00:45:33.086280 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 8 00:45:33.087143 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 8 00:45:33.087250 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 8 00:45:33.087347 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 8 00:45:33.087470 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 8 00:45:33.087580 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 8 00:45:33.087851 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 8 00:45:33.087956 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 8 00:45:33.088043 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 8 00:45:33.088133 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 8 00:45:33.088250 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 8 00:45:33.088359 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 8 00:45:33.088629 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 8 00:45:33.088855 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 8 00:45:33.088873 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 00:45:33.088882 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 00:45:33.088896 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 00:45:33.088909 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 00:45:33.088918 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 8 00:45:33.088926 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 8 00:45:33.088937 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 8 00:45:33.088946 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 8 00:45:33.088956 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 8 00:45:33.088965 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 8 00:45:33.088973 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 8 00:45:33.088982 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 8 00:45:33.088992 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 8 00:45:33.089002 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 8 00:45:33.089011 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 8 00:45:33.089019 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 8 00:45:33.089027 kernel: iommu: Default domain type: Translated
May 8 00:45:33.089036 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 00:45:33.089133 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 8 00:45:33.089216 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 00:45:33.089304 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 8 00:45:33.089316 kernel: vgaarb: loaded
May 8 00:45:33.089324 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 00:45:33.089333 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 00:45:33.089342 kernel: PTP clock support registered
May 8 00:45:33.089351 kernel: PCI: Using ACPI for IRQ routing
May 8 00:45:33.089359 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 00:45:33.089368 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 8 00:45:33.089377 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 8 00:45:33.089411 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 8 00:45:33.089421 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 8 00:45:33.089429 kernel: clocksource: Switched to clocksource kvm-clock
May 8 00:45:33.089438 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:45:33.089455 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:45:33.089464 kernel: pnp: PnP ACPI init
May 8 00:45:33.089602 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 8 00:45:33.089618 kernel: pnp: PnP ACPI: found 6 devices
May 8 00:45:33.089630 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 00:45:33.089639 kernel: NET: Registered PF_INET protocol family
May 8 00:45:33.089648 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:45:33.089656 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:45:33.089668 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:45:33.089676 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:45:33.089685 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 8 00:45:33.089694 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:45:33.089702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:45:33.089712 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:45:33.089721 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:45:33.089730 kernel: NET: Registered PF_XDP protocol family
May 8 00:45:33.089815 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 00:45:33.089895 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 00:45:33.089971 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 00:45:33.090043 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 8 00:45:33.090154 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 8 00:45:33.090772 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 8 00:45:33.090791 kernel: PCI: CLS 0 bytes, default 64
May 8 00:45:33.090799 kernel: Initialise system trusted keyrings
May 8 00:45:33.090808 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:45:33.090817 kernel: Key type asymmetric registered
May 8 00:45:33.090826 kernel: Asymmetric key parser 'x509' registered
May 8 00:45:33.090834 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 8 00:45:33.090843 kernel: io scheduler mq-deadline registered
May 8 00:45:33.090851 kernel: io scheduler kyber registered
May 8 00:45:33.090860 kernel: io scheduler bfq registered
May 8 00:45:33.090870 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 00:45:33.090879 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 8 00:45:33.090887 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 8 00:45:33.090896 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 8 00:45:33.090907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:45:33.090916 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 00:45:33.090925 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 00:45:33.090933 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 00:45:33.090944 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 00:45:33.091034 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 00:45:33.091046 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 00:45:33.091898 kernel: rtc_cmos 00:04: registered as rtc0
May 8 00:45:33.091984 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:45:32 UTC (1746665132)
May 8 00:45:33.092061 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 8 00:45:33.092072 kernel: NET: Registered PF_INET6 protocol family
May 8 00:45:33.092081 kernel: Segment Routing with IPv6
May 8 00:45:33.092090 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:45:33.092102 kernel: NET: Registered PF_PACKET protocol family
May 8 00:45:33.092111 kernel: Key type dns_resolver registered
May 8 00:45:33.092120 kernel: IPI shorthand broadcast: enabled
May 8 00:45:33.092128 kernel: sched_clock: Marking stable (675575030, 164141966)->(980634740, -140917744)
May 8 00:45:33.092137 kernel: registered taskstats version 1
May 8 00:45:33.092146 kernel: Loading compiled-in X.509 certificates
May 8 00:45:33.092155 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: c9ff13353458e6fa2786638fdd3dcad841d1075c'
May 8 00:45:33.092163 kernel: Key type .fscrypt registered
May 8 00:45:33.092172 kernel: Key type fscrypt-provisioning registered
May 8 00:45:33.092182 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:45:33.092191 kernel: ima: Allocated hash algorithm: sha1
May 8 00:45:33.092200 kernel: ima: No architecture policies found
May 8 00:45:33.092208 kernel: clk: Disabling unused clocks
May 8 00:45:33.092217 kernel: Freeing unused kernel image (initmem) memory: 47464K
May 8 00:45:33.092226 kernel: Write protecting the kernel read-only data: 28672k
May 8 00:45:33.092234 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 8 00:45:33.092243 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 8 00:45:33.092253 kernel: Run /init as init process
May 8 00:45:33.092262 kernel: with arguments:
May 8 00:45:33.092270 kernel: /init
May 8 00:45:33.092279 kernel: with environment:
May 8 00:45:33.092287 kernel: HOME=/
May 8 00:45:33.092295 kernel: TERM=linux
May 8 00:45:33.092304 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:45:33.092315 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:45:33.092328 systemd[1]: Detected virtualization kvm.
May 8 00:45:33.092338 systemd[1]: Detected architecture x86-64.
May 8 00:45:33.092347 systemd[1]: Running in initrd.
May 8 00:45:33.092355 systemd[1]: No hostname configured, using default hostname.
May 8 00:45:33.092364 systemd[1]: Hostname set to .
May 8 00:45:33.092374 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:45:33.092398 systemd[1]: Queued start job for default target initrd.target.
May 8 00:45:33.092407 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:45:33.092416 systemd[1]: Reached target cryptsetup.target.
May 8 00:45:33.092427 systemd[1]: Reached target paths.target.
May 8 00:45:33.092444 systemd[1]: Reached target slices.target.
May 8 00:45:33.092466 systemd[1]: Reached target swap.target.
May 8 00:45:33.092476 systemd[1]: Reached target timers.target.
May 8 00:45:33.092486 systemd[1]: Listening on iscsid.socket.
May 8 00:45:33.092497 systemd[1]: Listening on iscsiuio.socket.
May 8 00:45:33.092507 systemd[1]: Listening on systemd-journald-audit.socket.
May 8 00:45:33.092516 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 8 00:45:33.092526 systemd[1]: Listening on systemd-journald.socket.
May 8 00:45:33.092535 systemd[1]: Listening on systemd-networkd.socket.
May 8 00:45:33.092545 systemd[1]: Listening on systemd-udevd-control.socket.
May 8 00:45:33.092554 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 8 00:45:33.092565 systemd[1]: Reached target sockets.target.
May 8 00:45:33.092574 systemd[1]: Starting kmod-static-nodes.service...
May 8 00:45:33.092585 systemd[1]: Finished network-cleanup.service.
May 8 00:45:33.092595 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:45:33.092604 systemd[1]: Starting systemd-journald.service...
May 8 00:45:33.092614 systemd[1]: Starting systemd-modules-load.service...
May 8 00:45:33.092623 systemd[1]: Starting systemd-resolved.service...
May 8 00:45:33.092633 systemd[1]: Starting systemd-vconsole-setup.service...
May 8 00:45:33.092642 systemd[1]: Finished kmod-static-nodes.service.
May 8 00:45:33.092651 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:45:33.092661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 8 00:45:33.092680 systemd-journald[198]: Journal started
May 8 00:45:33.092733 systemd-journald[198]: Runtime Journal (/run/log/journal/537f1e88dc1943908bbc3402662f40a7) is 6.0M, max 48.5M, 42.5M free.
May 8 00:45:33.080831 systemd-modules-load[199]: Inserted module 'overlay'
May 8 00:45:33.122965 systemd[1]: Started systemd-journald.service.
May 8 00:45:33.112750 systemd-resolved[200]: Positive Trust Anchors:
May 8 00:45:33.112762 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:45:33.112788 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 8 00:45:33.115241 systemd-resolved[200]: Defaulting to hostname 'linux'.
May 8 00:45:33.127423 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:45:33.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.140877 systemd[1]: Started systemd-resolved.service.
May 8 00:45:33.156317 kernel: audit: type=1130 audit(1746665133.136:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.156355 kernel: Bridge firewalling registered
May 8 00:45:33.156368 kernel: audit: type=1130 audit(1746665133.150:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.156380 kernel: audit: type=1130 audit(1746665133.155:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.151176 systemd-modules-load[199]: Inserted module 'br_netfilter'
May 8 00:45:33.170943 kernel: audit: type=1130 audit(1746665133.159:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:33.151481 systemd[1]: Finished systemd-vconsole-setup.service.
May 8 00:45:33.156443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 8 00:45:33.159791 systemd[1]: Reached target nss-lookup.target.
May 8 00:45:33.168863 systemd[1]: Starting dracut-cmdline-ask.service...
May 8 00:45:33.199559 systemd[1]: Finished dracut-cmdline-ask.service.
May 8 00:45:33.239613 kernel: SCSI subsystem initialized May 8 00:45:33.239657 kernel: audit: type=1130 audit(1746665133.200:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.202473 systemd[1]: Starting dracut-cmdline.service... May 8 00:45:33.244557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:45:33.244579 kernel: device-mapper: uevent: version 1.0.3 May 8 00:45:33.244589 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 8 00:45:33.244611 dracut-cmdline[214]: dracut-dracut-053 May 8 00:45:33.244611 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 8 00:45:33.244611 dracut-cmdline[214]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:45:33.252303 systemd-modules-load[199]: Inserted module 'dm_multipath' May 8 00:45:33.253481 systemd[1]: Finished systemd-modules-load.service. May 8 00:45:33.281024 kernel: audit: type=1130 audit(1746665133.275:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.281055 kernel: Loading iSCSI transport class v2.0-870. 
May 8 00:45:33.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.281202 systemd[1]: Starting systemd-sysctl.service... May 8 00:45:33.289611 systemd[1]: Finished systemd-sysctl.service. May 8 00:45:33.295125 kernel: audit: type=1130 audit(1746665133.290:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.367460 kernel: iscsi: registered transport (tcp) May 8 00:45:33.390586 kernel: iscsi: registered transport (qla4xxx) May 8 00:45:33.390662 kernel: QLogic iSCSI HBA Driver May 8 00:45:33.423309 systemd[1]: Finished dracut-cmdline.service. May 8 00:45:33.480358 kernel: audit: type=1130 audit(1746665133.474:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:33.476428 systemd[1]: Starting dracut-pre-udev.service... 
May 8 00:45:33.535456 kernel: raid6: avx2x4 gen() 24067 MB/s May 8 00:45:33.552428 kernel: raid6: avx2x4 xor() 6031 MB/s May 8 00:45:33.600429 kernel: raid6: avx2x2 gen() 19004 MB/s May 8 00:45:33.617443 kernel: raid6: avx2x2 xor() 15030 MB/s May 8 00:45:33.703466 kernel: raid6: avx2x1 gen() 25833 MB/s May 8 00:45:33.720440 kernel: raid6: avx2x1 xor() 14672 MB/s May 8 00:45:33.737412 kernel: raid6: sse2x4 gen() 13376 MB/s May 8 00:45:33.787453 kernel: raid6: sse2x4 xor() 6926 MB/s May 8 00:45:33.830447 kernel: raid6: sse2x2 gen() 11994 MB/s May 8 00:45:33.936455 kernel: raid6: sse2x2 xor() 9297 MB/s May 8 00:45:33.953468 kernel: raid6: sse2x1 gen() 12030 MB/s May 8 00:45:34.027111 kernel: raid6: sse2x1 xor() 6175 MB/s May 8 00:45:34.027202 kernel: raid6: using algorithm avx2x1 gen() 25833 MB/s May 8 00:45:34.027212 kernel: raid6: .... xor() 14672 MB/s, rmw enabled May 8 00:45:34.027846 kernel: raid6: using avx2x2 recovery algorithm May 8 00:45:34.044449 kernel: xor: automatically using best checksumming function avx May 8 00:45:34.153460 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 8 00:45:34.163844 systemd[1]: Finished dracut-pre-udev.service. May 8 00:45:34.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:34.165000 audit: BPF prog-id=7 op=LOAD May 8 00:45:34.168000 audit: BPF prog-id=8 op=LOAD May 8 00:45:34.169408 kernel: audit: type=1130 audit(1746665134.164:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:34.169576 systemd[1]: Starting systemd-udevd.service... May 8 00:45:34.185769 systemd-udevd[399]: Using default interface naming scheme 'v252'. May 8 00:45:34.191089 systemd[1]: Started systemd-udevd.service. 
May 8 00:45:34.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:34.215689 systemd[1]: Starting dracut-pre-trigger.service... May 8 00:45:34.229104 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 8 00:45:34.265874 systemd[1]: Finished dracut-pre-trigger.service. May 8 00:45:34.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:34.267626 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:45:34.304723 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:45:34.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:34.339707 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:45:34.376648 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:45:34.376668 kernel: GPT:9289727 != 19775487 May 8 00:45:34.376680 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:45:34.376692 kernel: GPT:9289727 != 19775487 May 8 00:45:34.376703 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:45:34.376720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:34.379409 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:45:34.403427 kernel: libata version 3.00 loaded. May 8 00:45:34.404406 kernel: AVX2 version of gcm_enc/dec engaged. 
May 8 00:45:34.404463 kernel: AES CTR mode by8 optimization enabled May 8 00:45:34.412430 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448) May 8 00:45:34.422710 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:45:34.438067 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:45:34.438096 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:45:34.438225 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:45:34.438322 kernel: scsi host0: ahci May 8 00:45:34.438513 kernel: scsi host1: ahci May 8 00:45:34.438624 kernel: scsi host2: ahci May 8 00:45:34.438710 kernel: scsi host3: ahci May 8 00:45:34.438874 kernel: scsi host4: ahci May 8 00:45:34.439007 kernel: scsi host5: ahci May 8 00:45:34.439097 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 8 00:45:34.439108 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 8 00:45:34.439117 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 8 00:45:34.439126 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 8 00:45:34.439135 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 8 00:45:34.439143 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 8 00:45:34.422658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 8 00:45:34.522290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 8 00:45:34.528779 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 8 00:45:34.575246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 8 00:45:34.580064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:45:34.666674 systemd[1]: Starting disk-uuid.service... 
May 8 00:45:34.768722 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:45:34.768811 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:45:34.768837 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:45:34.770970 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:45:34.771054 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:45:34.772429 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:45:34.773421 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:45:34.774522 kernel: ata3.00: applying bridge limits May 8 00:45:34.775425 kernel: ata3.00: configured for UDMA/100 May 8 00:45:34.777423 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:45:34.810986 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:45:34.828299 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:45:34.828329 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:45:34.837971 disk-uuid[523]: Primary Header is updated. May 8 00:45:34.837971 disk-uuid[523]: Secondary Entries is updated. May 8 00:45:34.837971 disk-uuid[523]: Secondary Header is updated. May 8 00:45:34.843399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:34.848467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:34.853450 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:35.853441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:45:35.853511 disk-uuid[536]: The operation has completed successfully. May 8 00:45:35.878644 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:45:35.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:35.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:35.878756 systemd[1]: Finished disk-uuid.service. May 8 00:45:35.888114 systemd[1]: Starting verity-setup.service... May 8 00:45:35.903423 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:45:35.924870 systemd[1]: Found device dev-mapper-usr.device. May 8 00:45:35.927710 systemd[1]: Mounting sysusr-usr.mount... May 8 00:45:35.931996 systemd[1]: Finished verity-setup.service. May 8 00:45:35.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.002426 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 8 00:45:36.002585 systemd[1]: Mounted sysusr-usr.mount. May 8 00:45:36.003737 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 8 00:45:36.004671 systemd[1]: Starting ignition-setup.service... May 8 00:45:36.007569 systemd[1]: Starting parse-ip-for-networkd.service... May 8 00:45:36.015176 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:36.015210 kernel: BTRFS info (device vda6): using free space tree May 8 00:45:36.015220 kernel: BTRFS info (device vda6): has skinny extents May 8 00:45:36.024754 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:45:36.034484 systemd[1]: Finished ignition-setup.service. May 8 00:45:36.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.036575 systemd[1]: Starting ignition-fetch-offline.service... 
May 8 00:45:36.116298 systemd[1]: Finished parse-ip-for-networkd.service. May 8 00:45:36.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.182000 audit: BPF prog-id=9 op=LOAD May 8 00:45:36.183871 systemd[1]: Starting systemd-networkd.service... May 8 00:45:36.206993 systemd-networkd[710]: lo: Link UP May 8 00:45:36.207006 systemd-networkd[710]: lo: Gained carrier May 8 00:45:36.221985 systemd-networkd[710]: Enumeration completed May 8 00:45:36.222136 systemd[1]: Started systemd-networkd.service. May 8 00:45:36.223228 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:45:36.225055 systemd-networkd[710]: eth0: Link UP May 8 00:45:36.225074 systemd-networkd[710]: eth0: Gained carrier May 8 00:45:36.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.229834 systemd[1]: Reached target network.target. May 8 00:45:36.233896 systemd[1]: Starting iscsiuio.service... May 8 00:45:36.287849 ignition[638]: Ignition 2.14.0 May 8 00:45:36.287865 ignition[638]: Stage: fetch-offline May 8 00:45:36.287973 ignition[638]: no configs at "/usr/lib/ignition/base.d" May 8 00:45:36.287984 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:36.288130 ignition[638]: parsed url from cmdline: "" May 8 00:45:36.288134 ignition[638]: no config URL provided May 8 00:45:36.288138 ignition[638]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:45:36.322418 systemd[1]: Started iscsiuio.service. 
May 8 00:45:36.288145 ignition[638]: no config at "/usr/lib/ignition/user.ign" May 8 00:45:36.323556 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:45:36.288166 ignition[638]: op(1): [started] loading QEMU firmware config module May 8 00:45:36.328086 systemd[1]: Starting iscsid.service... May 8 00:45:36.334191 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 8 00:45:36.334191 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 8 00:45:36.334191 iscsid[722]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 8 00:45:36.334191 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 8 00:45:36.334191 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored. May 8 00:45:36.334191 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 8 00:45:36.334191 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 8 00:45:36.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:36.288170 ignition[638]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:45:36.336064 systemd[1]: Started iscsid.service. May 8 00:45:36.323599 ignition[638]: op(1): [finished] loading QEMU firmware config module May 8 00:45:36.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.338132 systemd[1]: Starting dracut-initqueue.service... May 8 00:45:36.353577 systemd[1]: Finished dracut-initqueue.service. May 8 00:45:36.356921 systemd[1]: Reached target remote-fs-pre.target. May 8 00:45:36.360242 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:45:36.361663 systemd[1]: Reached target remote-fs.target. May 8 00:45:36.363766 systemd[1]: Starting dracut-pre-mount.service... May 8 00:45:36.375845 systemd[1]: Finished dracut-pre-mount.service. May 8 00:45:36.430375 ignition[638]: parsing config with SHA512: 15f8a35bf69198af10dd4218944f88d8cfd4ab93526d8401897aa7d8bf28f9d705e628ca7fc9a73343574ac63a47e5c82fa894b7811b66d653c11904b3fe5d31 May 8 00:45:36.466078 unknown[638]: fetched base config from "system" May 8 00:45:36.466092 unknown[638]: fetched user config from "qemu" May 8 00:45:36.466602 ignition[638]: fetch-offline: fetch-offline passed May 8 00:45:36.466668 ignition[638]: Ignition finished successfully May 8 00:45:36.487470 systemd[1]: Finished ignition-fetch-offline.service. May 8 00:45:36.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.489553 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:45:36.490703 systemd[1]: Starting ignition-kargs.service... 
May 8 00:45:36.597553 ignition[736]: Ignition 2.14.0 May 8 00:45:36.597571 ignition[736]: Stage: kargs May 8 00:45:36.597750 ignition[736]: no configs at "/usr/lib/ignition/base.d" May 8 00:45:36.597763 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:36.599639 ignition[736]: kargs: kargs passed May 8 00:45:36.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.601063 systemd[1]: Finished ignition-kargs.service. May 8 00:45:36.599709 ignition[736]: Ignition finished successfully May 8 00:45:36.604027 systemd[1]: Starting ignition-disks.service... May 8 00:45:36.613717 ignition[742]: Ignition 2.14.0 May 8 00:45:36.613728 ignition[742]: Stage: disks May 8 00:45:36.613830 ignition[742]: no configs at "/usr/lib/ignition/base.d" May 8 00:45:36.613839 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:36.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:36.615778 systemd[1]: Finished ignition-disks.service. May 8 00:45:36.615024 ignition[742]: disks: disks passed May 8 00:45:36.617041 systemd[1]: Reached target initrd-root-device.target. May 8 00:45:36.615067 ignition[742]: Ignition finished successfully May 8 00:45:36.619014 systemd[1]: Reached target local-fs-pre.target. May 8 00:45:36.619890 systemd[1]: Reached target local-fs.target. May 8 00:45:36.621454 systemd[1]: Reached target sysinit.target. May 8 00:45:36.622872 systemd[1]: Reached target basic.target. May 8 00:45:36.625398 systemd[1]: Starting systemd-fsck-root.service... 
May 8 00:45:36.672922 systemd-fsck[750]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 8 00:45:37.308194 systemd[1]: Finished systemd-fsck-root.service. May 8 00:45:37.314788 kernel: kauditd_printk_skb: 19 callbacks suppressed May 8 00:45:37.314814 kernel: audit: type=1130 audit(1746665137.309:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.310315 systemd[1]: Mounting sysroot.mount... May 8 00:45:37.365434 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 8 00:45:37.366331 systemd[1]: Mounted sysroot.mount. May 8 00:45:37.366555 systemd[1]: Reached target initrd-root-fs.target. May 8 00:45:37.370034 systemd[1]: Mounting sysroot-usr.mount... May 8 00:45:37.371123 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 8 00:45:37.371164 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:45:37.371189 systemd[1]: Reached target ignition-diskful.target. May 8 00:45:37.373540 systemd[1]: Mounted sysroot-usr.mount. May 8 00:45:37.376203 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 00:45:37.378407 systemd[1]: Starting initrd-setup-root.service... 
May 8 00:45:37.385420 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (756) May 8 00:45:37.385476 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:37.387544 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:45:37.390066 kernel: BTRFS info (device vda6): using free space tree May 8 00:45:37.390085 kernel: BTRFS info (device vda6): has skinny extents May 8 00:45:37.392142 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory May 8 00:45:37.392531 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 8 00:45:37.398631 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:45:37.402488 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:45:37.441711 systemd[1]: Finished initrd-setup-root.service. May 8 00:45:37.447973 kernel: audit: type=1130 audit(1746665137.442:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.444081 systemd[1]: Starting ignition-mount.service... May 8 00:45:37.448809 systemd[1]: Starting sysroot-boot.service... May 8 00:45:37.451118 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 8 00:45:37.451195 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 8 00:45:37.471871 systemd[1]: Finished sysroot-boot.service. May 8 00:45:37.476796 kernel: audit: type=1130 audit(1746665137.472:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:37.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.491828 ignition[824]: INFO : Ignition 2.14.0 May 8 00:45:37.491828 ignition[824]: INFO : Stage: mount May 8 00:45:37.493811 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:45:37.493811 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:37.497043 ignition[824]: INFO : mount: mount passed May 8 00:45:37.497043 ignition[824]: INFO : Ignition finished successfully May 8 00:45:37.498475 systemd[1]: Finished ignition-mount.service. May 8 00:45:37.504516 kernel: audit: type=1130 audit(1746665137.499:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:37.501180 systemd[1]: Starting ignition-files.service... May 8 00:45:37.510140 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 00:45:37.520025 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (831) May 8 00:45:37.520112 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:45:37.520123 kernel: BTRFS info (device vda6): using free space tree May 8 00:45:37.520834 kernel: BTRFS info (device vda6): has skinny extents May 8 00:45:37.526308 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 8 00:45:37.539800 ignition[850]: INFO : Ignition 2.14.0 May 8 00:45:37.539800 ignition[850]: INFO : Stage: files May 8 00:45:37.559401 ignition[850]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:45:37.559401 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:37.559401 ignition[850]: DEBUG : files: compiled without relabeling support, skipping May 8 00:45:37.564051 ignition[850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:45:37.564051 ignition[850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:45:37.564051 ignition[850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:45:37.564051 ignition[850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:45:37.564051 ignition[850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:45:37.563621 unknown[850]: wrote ssh authorized keys file for user: core May 8 00:45:37.575347 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:45:37.575347 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 8 00:45:37.617323 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 00:45:37.815049 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 8 00:45:37.817297 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:45:37.817297 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 8 00:45:37.870554 systemd-networkd[710]: eth0: Gained IPv6LL May 8 00:45:38.324635 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 00:45:38.510835 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 00:45:38.510835 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:45:38.515536 ignition[850]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:38.515536 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:45:38.804239 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 00:45:39.408905 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:45:39.408905 ignition[850]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:45:39.414438 ignition[850]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:45:39.532956 ignition[850]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:45:39.535019 ignition[850]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:45:39.536978 ignition[850]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:45:39.538944 ignition[850]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:45:39.538944 ignition[850]: INFO : files: files passed May 8 00:45:39.541826 ignition[850]: INFO : Ignition finished successfully May 8 00:45:39.541439 systemd[1]: Finished ignition-files.service. May 8 00:45:39.549021 kernel: audit: type=1130 audit(1746665139.543:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:39.544803 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 8 00:45:39.549047 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 8 00:45:39.554746 initrd-setup-root-after-ignition[873]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 8 00:45:39.560670 kernel: audit: type=1130 audit(1746665139.554:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.549898 systemd[1]: Starting ignition-quench.service... May 8 00:45:39.569165 kernel: audit: type=1130 audit(1746665139.560:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.569186 kernel: audit: type=1131 audit(1746665139.560:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:39.569334 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:45:39.551464 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 8 00:45:39.555019 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:45:39.555099 systemd[1]: Finished ignition-quench.service. May 8 00:45:39.560843 systemd[1]: Reached target ignition-complete.target. May 8 00:45:39.570078 systemd[1]: Starting initrd-parse-etc.service... May 8 00:45:39.584771 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:45:39.584857 systemd[1]: Finished initrd-parse-etc.service. May 8 00:45:39.593940 kernel: audit: type=1130 audit(1746665139.586:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.593960 kernel: audit: type=1131 audit(1746665139.586:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.586668 systemd[1]: Reached target initrd-fs.target. May 8 00:45:39.593947 systemd[1]: Reached target initrd.target. May 8 00:45:39.594763 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 8 00:45:39.595490 systemd[1]: Starting dracut-pre-pivot.service... May 8 00:45:39.606758 systemd[1]: Finished dracut-pre-pivot.service. 
May 8 00:45:39.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.608375 systemd[1]: Starting initrd-cleanup.service... May 8 00:45:39.616827 systemd[1]: Stopped target nss-lookup.target. May 8 00:45:39.629979 systemd[1]: Stopped target remote-cryptsetup.target. May 8 00:45:39.631602 systemd[1]: Stopped target timers.target. May 8 00:45:39.633146 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:45:39.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.633244 systemd[1]: Stopped dracut-pre-pivot.service. May 8 00:45:39.634714 systemd[1]: Stopped target initrd.target. May 8 00:45:39.636295 systemd[1]: Stopped target basic.target. May 8 00:45:39.637797 systemd[1]: Stopped target ignition-complete.target. May 8 00:45:39.639431 systemd[1]: Stopped target ignition-diskful.target. May 8 00:45:39.640964 systemd[1]: Stopped target initrd-root-device.target. May 8 00:45:39.642698 systemd[1]: Stopped target remote-fs.target. May 8 00:45:39.644300 systemd[1]: Stopped target remote-fs-pre.target. May 8 00:45:39.645943 systemd[1]: Stopped target sysinit.target. May 8 00:45:39.647492 systemd[1]: Stopped target local-fs.target. May 8 00:45:39.649095 systemd[1]: Stopped target local-fs-pre.target. May 8 00:45:39.650633 systemd[1]: Stopped target swap.target. May 8 00:45:39.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.652055 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:45:39.652146 systemd[1]: Stopped dracut-pre-mount.service. 
May 8 00:45:39.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.653748 systemd[1]: Stopped target cryptsetup.target. May 8 00:45:39.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.655153 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:45:39.655243 systemd[1]: Stopped dracut-initqueue.service. May 8 00:45:39.656997 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:45:39.657100 systemd[1]: Stopped ignition-fetch-offline.service. May 8 00:45:39.658628 systemd[1]: Stopped target paths.target. May 8 00:45:39.660020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:45:39.661459 systemd[1]: Stopped systemd-ask-password-console.path. May 8 00:45:39.662723 systemd[1]: Stopped target slices.target. May 8 00:45:39.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.664502 systemd[1]: Stopped target sockets.target. May 8 00:45:39.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.666200 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:45:39.666282 systemd[1]: Closed iscsid.socket. May 8 00:45:39.667631 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:45:39.667697 systemd[1]: Closed iscsiuio.socket. May 8 00:45:39.669374 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
May 8 00:45:39.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.669488 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 8 00:45:39.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.671238 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:45:39.671342 systemd[1]: Stopped ignition-files.service. May 8 00:45:39.673646 systemd[1]: Stopping ignition-mount.service... May 8 00:45:39.676067 systemd[1]: Stopping sysroot-boot.service... May 8 00:45:39.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.677333 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:45:39.677602 systemd[1]: Stopped systemd-udev-trigger.service. May 8 00:45:39.679175 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:45:39.679368 systemd[1]: Stopped dracut-pre-trigger.service. May 8 00:45:39.684657 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:45:39.684763 systemd[1]: Finished initrd-cleanup.service. 
May 8 00:45:39.692242 ignition[890]: INFO : Ignition 2.14.0 May 8 00:45:39.692242 ignition[890]: INFO : Stage: umount May 8 00:45:39.693937 ignition[890]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:45:39.693937 ignition[890]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:45:39.693937 ignition[890]: INFO : umount: umount passed May 8 00:45:39.693937 ignition[890]: INFO : Ignition finished successfully May 8 00:45:39.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.694741 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:45:39.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.695185 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:45:39.695291 systemd[1]: Stopped ignition-mount.service. May 8 00:45:39.697359 systemd[1]: Stopped target network.target. May 8 00:45:39.698910 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:45:39.698950 systemd[1]: Stopped ignition-disks.service. May 8 00:45:39.700577 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:45:39.700610 systemd[1]: Stopped ignition-kargs.service. May 8 00:45:39.701456 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 8 00:45:39.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.701490 systemd[1]: Stopped ignition-setup.service. May 8 00:45:39.703160 systemd[1]: Stopping systemd-networkd.service... May 8 00:45:39.704905 systemd[1]: Stopping systemd-resolved.service... May 8 00:45:39.708429 systemd-networkd[710]: eth0: DHCPv6 lease lost May 8 00:45:39.716000 audit: BPF prog-id=9 op=UNLOAD May 8 00:45:39.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.710145 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:45:39.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.710254 systemd[1]: Stopped systemd-networkd.service. May 8 00:45:39.713598 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:45:39.713627 systemd[1]: Closed systemd-networkd.socket. May 8 00:45:39.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.715572 systemd[1]: Stopping network-cleanup.service... May 8 00:45:39.716964 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 8 00:45:39.727000 audit: BPF prog-id=6 op=UNLOAD May 8 00:45:39.717022 systemd[1]: Stopped parse-ip-for-networkd.service. May 8 00:45:39.717929 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:45:39.717971 systemd[1]: Stopped systemd-sysctl.service. May 8 00:45:39.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.719519 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:45:39.719552 systemd[1]: Stopped systemd-modules-load.service. May 8 00:45:39.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.721531 systemd[1]: Stopping systemd-udevd.service... May 8 00:45:39.722632 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:45:39.723121 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:45:39.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.723212 systemd[1]: Stopped systemd-resolved.service. May 8 00:45:39.729288 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:45:39.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.729431 systemd[1]: Stopped systemd-udevd.service. 
May 8 00:45:39.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.746902 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:45:39.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.746989 systemd[1]: Stopped network-cleanup.service. May 8 00:45:39.748960 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:45:39.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.749012 systemd[1]: Closed systemd-udevd-control.socket. May 8 00:45:39.763950 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:45:39.763989 systemd[1]: Closed systemd-udevd-kernel.socket. May 8 00:45:39.765773 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:45:39.765823 systemd[1]: Stopped dracut-pre-udev.service. May 8 00:45:39.767981 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:45:39.768026 systemd[1]: Stopped dracut-cmdline.service. May 8 00:45:39.769918 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 8 00:45:39.769962 systemd[1]: Stopped dracut-cmdline-ask.service. May 8 00:45:39.772627 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 8 00:45:39.773809 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:45:39.773862 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 8 00:45:39.775727 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:45:39.775779 systemd[1]: Stopped kmod-static-nodes.service. May 8 00:45:39.776731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:45:39.776777 systemd[1]: Stopped systemd-vconsole-setup.service. May 8 00:45:39.778783 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:45:39.779283 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:45:39.779372 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 8 00:45:39.817655 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:45:39.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.817783 systemd[1]: Stopped sysroot-boot.service. May 8 00:45:39.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:39.818935 systemd[1]: Reached target initrd-switch-root.target. May 8 00:45:39.820687 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:45:39.820729 systemd[1]: Stopped initrd-setup-root.service. May 8 00:45:39.823959 systemd[1]: Starting initrd-switch-root.service... May 8 00:45:39.837610 systemd[1]: Switching root. May 8 00:45:39.860302 iscsid[722]: iscsid shutting down. 
May 8 00:45:39.861576 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). May 8 00:45:39.861645 systemd-journald[198]: Journal stopped May 8 00:45:44.803191 kernel: SELinux: Class mctp_socket not defined in policy. May 8 00:45:44.803246 kernel: SELinux: Class anon_inode not defined in policy. May 8 00:45:44.803259 kernel: SELinux: the above unknown classes and permissions will be allowed May 8 00:45:44.803270 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:45:44.803279 kernel: SELinux: policy capability open_perms=1 May 8 00:45:44.803292 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:45:44.803302 kernel: SELinux: policy capability always_check_network=0 May 8 00:45:44.803314 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:45:44.803328 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:45:44.803338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:45:44.803348 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:45:44.803359 systemd[1]: Successfully loaded SELinux policy in 45.437ms. May 8 00:45:44.803378 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.740ms. May 8 00:45:44.803403 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:45:44.803426 systemd[1]: Detected virtualization kvm. May 8 00:45:44.803437 systemd[1]: Detected architecture x86-64. May 8 00:45:44.803452 systemd[1]: Detected first boot. May 8 00:45:44.803463 systemd[1]: Initializing machine ID from VM UUID. May 8 00:45:44.803474 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 8 00:45:44.803487 systemd[1]: Populated /etc with preset unit settings. May 8 00:45:44.803500 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:45:44.803516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:45:44.803528 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:45:44.803540 kernel: kauditd_printk_skb: 50 callbacks suppressed May 8 00:45:44.803550 kernel: audit: type=1334 audit(1746665144.423:83): prog-id=12 op=LOAD May 8 00:45:44.803560 kernel: audit: type=1334 audit(1746665144.423:84): prog-id=3 op=UNLOAD May 8 00:45:44.803569 kernel: audit: type=1334 audit(1746665144.426:85): prog-id=13 op=LOAD May 8 00:45:44.803578 kernel: audit: type=1334 audit(1746665144.428:86): prog-id=14 op=LOAD May 8 00:45:44.803590 kernel: audit: type=1334 audit(1746665144.428:87): prog-id=4 op=UNLOAD May 8 00:45:44.803599 kernel: audit: type=1334 audit(1746665144.428:88): prog-id=5 op=UNLOAD May 8 00:45:44.803609 kernel: audit: type=1334 audit(1746665144.430:89): prog-id=15 op=LOAD May 8 00:45:44.803621 kernel: audit: type=1334 audit(1746665144.430:90): prog-id=12 op=UNLOAD May 8 00:45:44.803631 kernel: audit: type=1334 audit(1746665144.432:91): prog-id=16 op=LOAD May 8 00:45:44.803640 kernel: audit: type=1334 audit(1746665144.433:92): prog-id=17 op=LOAD May 8 00:45:44.803650 systemd[1]: iscsiuio.service: Deactivated successfully. May 8 00:45:44.803661 systemd[1]: Stopped iscsiuio.service. May 8 00:45:44.803672 systemd[1]: iscsid.service: Deactivated successfully. May 8 00:45:44.803686 systemd[1]: Stopped iscsid.service. 
May 8 00:45:44.803697 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:45:44.803707 systemd[1]: Stopped initrd-switch-root.service. May 8 00:45:44.803720 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:45:44.803732 systemd[1]: Created slice system-addon\x2dconfig.slice. May 8 00:45:44.803743 systemd[1]: Created slice system-addon\x2drun.slice. May 8 00:45:44.803753 systemd[1]: Created slice system-getty.slice. May 8 00:45:44.803766 systemd[1]: Created slice system-modprobe.slice. May 8 00:45:44.803777 systemd[1]: Created slice system-serial\x2dgetty.slice. May 8 00:45:44.803789 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 8 00:45:44.803799 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 8 00:45:44.803810 systemd[1]: Created slice user.slice. May 8 00:45:44.803820 systemd[1]: Started systemd-ask-password-console.path. May 8 00:45:44.803831 systemd[1]: Started systemd-ask-password-wall.path. May 8 00:45:44.803842 systemd[1]: Set up automount boot.automount. May 8 00:45:44.803852 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 00:45:44.803864 systemd[1]: Stopped target initrd-switch-root.target. May 8 00:45:44.803875 systemd[1]: Stopped target initrd-fs.target. May 8 00:45:44.803885 systemd[1]: Stopped target initrd-root-fs.target. May 8 00:45:44.803895 systemd[1]: Reached target integritysetup.target. May 8 00:45:44.803906 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:45:44.803917 systemd[1]: Reached target remote-fs.target. May 8 00:45:44.803929 systemd[1]: Reached target slices.target. May 8 00:45:44.803940 systemd[1]: Reached target swap.target. May 8 00:45:44.803952 systemd[1]: Reached target torcx.target. May 8 00:45:44.803962 systemd[1]: Reached target veritysetup.target. May 8 00:45:44.803975 systemd[1]: Listening on systemd-coredump.socket. May 8 00:45:44.803986 systemd[1]: Listening on systemd-initctl.socket. 
May 8 00:45:44.803997 systemd[1]: Listening on systemd-networkd.socket. May 8 00:45:44.804008 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:45:44.804018 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:45:44.804029 systemd[1]: Listening on systemd-userdbd.socket. May 8 00:45:44.804039 systemd[1]: Mounting dev-hugepages.mount... May 8 00:45:44.804050 systemd[1]: Mounting dev-mqueue.mount... May 8 00:45:44.804061 systemd[1]: Mounting media.mount... May 8 00:45:44.804073 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:44.804084 systemd[1]: Mounting sys-kernel-debug.mount... May 8 00:45:44.804095 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 00:45:44.804107 systemd[1]: Mounting tmp.mount... May 8 00:45:44.804118 systemd[1]: Starting flatcar-tmpfiles.service... May 8 00:45:44.804128 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:45:44.804149 systemd[1]: Starting kmod-static-nodes.service... May 8 00:45:44.804161 systemd[1]: Starting modprobe@configfs.service... May 8 00:45:44.804172 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:45:44.804185 systemd[1]: Starting modprobe@drm.service... May 8 00:45:44.804198 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:45:44.804209 systemd[1]: Starting modprobe@fuse.service... May 8 00:45:44.804219 systemd[1]: Starting modprobe@loop.service... May 8 00:45:44.804230 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:45:44.804241 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:45:44.804252 systemd[1]: Stopped systemd-fsck-root.service. May 8 00:45:44.804264 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:45:44.804275 kernel: loop: module loaded May 8 00:45:44.804287 systemd[1]: Stopped systemd-fsck-usr.service. 
May 8 00:45:44.804297 kernel: fuse: init (API version 7.34) May 8 00:45:44.804308 systemd[1]: Stopped systemd-journald.service. May 8 00:45:44.804318 systemd[1]: Starting systemd-journald.service... May 8 00:45:44.804329 systemd[1]: Starting systemd-modules-load.service... May 8 00:45:44.804340 systemd[1]: Starting systemd-network-generator.service... May 8 00:45:44.804350 systemd[1]: Starting systemd-remount-fs.service... May 8 00:45:44.804361 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:45:44.804372 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:45:44.804410 systemd[1]: Stopped verity-setup.service. May 8 00:45:44.804422 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:44.804432 systemd[1]: Mounted dev-hugepages.mount. May 8 00:45:44.804443 systemd[1]: Mounted dev-mqueue.mount. May 8 00:45:44.804454 systemd[1]: Mounted media.mount. May 8 00:45:44.804464 systemd[1]: Mounted sys-kernel-debug.mount. May 8 00:45:44.804475 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 00:45:44.804486 systemd[1]: Mounted tmp.mount. May 8 00:45:44.804497 systemd[1]: Finished flatcar-tmpfiles.service. May 8 00:45:44.804509 systemd[1]: Finished kmod-static-nodes.service. May 8 00:45:44.804521 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:45:44.804531 systemd[1]: Finished modprobe@configfs.service. May 8 00:45:44.804542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:44.804552 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:45:44.804564 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:45:44.804575 systemd[1]: Finished modprobe@drm.service. May 8 00:45:44.804587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:44.804598 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:45:44.804608 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 8 00:45:44.804620 systemd[1]: Finished modprobe@fuse.service. May 8 00:45:44.804632 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:44.804643 systemd[1]: Finished modprobe@loop.service. May 8 00:45:44.804653 systemd[1]: Finished systemd-modules-load.service. May 8 00:45:44.804665 systemd[1]: Finished systemd-network-generator.service. May 8 00:45:44.804675 systemd[1]: Finished systemd-remount-fs.service. May 8 00:45:44.804685 systemd[1]: Reached target network-pre.target. May 8 00:45:44.804696 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:45:44.804706 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:45:44.804717 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:45:44.804728 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:45:44.804741 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:44.804767 systemd[1]: Starting systemd-random-seed.service... May 8 00:45:44.804785 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:45:44.804798 systemd-journald[1005]: Journal started May 8 00:45:44.804839 systemd-journald[1005]: Runtime Journal (/run/log/journal/537f1e88dc1943908bbc3402662f40a7) is 6.0M, max 48.5M, 42.5M free. 
May 8 00:45:39.934000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:45:40.829000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:45:40.829000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:45:40.829000 audit: BPF prog-id=10 op=LOAD May 8 00:45:40.829000 audit: BPF prog-id=10 op=UNLOAD May 8 00:45:40.829000 audit: BPF prog-id=11 op=LOAD May 8 00:45:40.829000 audit: BPF prog-id=11 op=UNLOAD May 8 00:45:40.870000 audit[924]: AVC avc: denied { associate } for pid=924 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 8 00:45:40.870000 audit[924]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0000a48e4 a1=c0000a6c18 a2=c0000bad40 a3=32 items=0 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:45:40.870000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:45:40.872000 audit[924]: AVC avc: denied { associate } for pid=924 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 8 00:45:40.872000 audit[924]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000a49b9 a2=1ed a3=0 items=2 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:45:40.872000 audit: CWD cwd="/" May 8 00:45:40.872000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:40.872000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:40.872000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:45:44.423000 audit: BPF prog-id=12 op=LOAD May 8 00:45:44.423000 audit: BPF prog-id=3 op=UNLOAD May 8 00:45:44.426000 audit: BPF prog-id=13 op=LOAD May 8 00:45:44.428000 audit: BPF prog-id=14 op=LOAD May 8 00:45:44.428000 audit: BPF prog-id=4 op=UNLOAD May 8 00:45:44.428000 audit: BPF prog-id=5 op=UNLOAD May 8 00:45:44.430000 audit: BPF prog-id=15 op=LOAD May 8 00:45:44.430000 audit: BPF prog-id=12 op=UNLOAD May 8 00:45:44.432000 audit: BPF prog-id=16 op=LOAD May 8 00:45:44.433000 audit: BPF prog-id=17 op=LOAD May 8 00:45:44.433000 audit: BPF prog-id=13 op=UNLOAD May 8 00:45:44.433000 audit: BPF prog-id=14 op=UNLOAD May 8 00:45:44.806434 systemd[1]: Starting systemd-sysctl.service... 
May 8 00:45:44.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.473000 audit: BPF prog-id=15 op=UNLOAD May 8 00:45:44.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:44.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.632000 audit: BPF prog-id=18 op=LOAD May 8 00:45:44.632000 audit: BPF prog-id=19 op=LOAD May 8 00:45:44.632000 audit: BPF prog-id=20 op=LOAD May 8 00:45:44.632000 audit: BPF prog-id=16 op=UNLOAD May 8 00:45:44.632000 audit: BPF prog-id=17 op=UNLOAD May 8 00:45:44.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:44.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.813550 systemd[1]: Starting systemd-sysusers.service... 
May 8 00:45:44.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.800000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:45:44.800000 audit[1005]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffd816d600 a2=4000 a3=7fffd816d69c items=0 ppid=1 pid=1005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:45:44.800000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 00:45:40.869369 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:45:44.422779 systemd[1]: Queued start job for default 
target multi-user.target. May 8 00:45:40.869625 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:45:44.422792 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 8 00:45:40.869645 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:45:44.434631 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:45:40.869686 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 8 00:45:40.869697 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="skipped missing lower profile" missing profile=oem May 8 00:45:40.869723 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 8 00:45:40.869737 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 8 00:45:44.815755 systemd[1]: Started systemd-journald.service. 
May 8 00:45:40.869962 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 8 00:45:40.869993 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:45:40.870006 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:45:40.870796 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 8 00:45:40.870831 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 8 00:45:44.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:40.870854 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 8 00:45:40.870868 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 8 00:45:40.870885 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 8 00:45:40.870898 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 8 00:45:44.116212 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:45:44.116513 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:45:44.816878 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
May 8 00:45:44.116617 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:45:44.817950 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:45:44.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.116778 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:45:44.819141 systemd[1]: Finished systemd-random-seed.service. May 8 00:45:44.116825 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 8 00:45:44.116880 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:45:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 8 00:45:44.820675 systemd[1]: Reached target first-boot-complete.target. May 8 00:45:44.823026 systemd[1]: Starting systemd-journal-flush.service... May 8 00:45:44.830517 systemd-journald[1005]: Time spent on flushing to /var/log/journal/537f1e88dc1943908bbc3402662f40a7 is 17.003ms for 1129 entries. 
May 8 00:45:44.830517 systemd-journald[1005]: System Journal (/var/log/journal/537f1e88dc1943908bbc3402662f40a7) is 8.0M, max 195.6M, 187.6M free. May 8 00:45:44.858358 systemd-journald[1005]: Received client request to flush runtime journal. May 8 00:45:44.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.831372 systemd[1]: Finished systemd-sysctl.service. May 8 00:45:44.841092 systemd[1]: Finished systemd-sysusers.service. May 8 00:45:44.843349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:45:44.859829 systemd[1]: Finished systemd-journal-flush.service. May 8 00:45:44.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.864896 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:45:44.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.874649 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:45:44.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:44.876958 systemd[1]: Starting systemd-udev-settle.service... 
May 8 00:45:44.884911 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:45:45.788902 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:45:45.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:45.790000 audit: BPF prog-id=21 op=LOAD May 8 00:45:45.790000 audit: BPF prog-id=22 op=LOAD May 8 00:45:45.790000 audit: BPF prog-id=7 op=UNLOAD May 8 00:45:45.790000 audit: BPF prog-id=8 op=UNLOAD May 8 00:45:45.791964 systemd[1]: Starting systemd-udevd.service... May 8 00:45:45.810006 systemd-udevd[1033]: Using default interface naming scheme 'v252'. May 8 00:45:45.833215 systemd[1]: Started systemd-udevd.service. May 8 00:45:45.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:45.835000 audit: BPF prog-id=23 op=LOAD May 8 00:45:45.837098 systemd[1]: Starting systemd-networkd.service... May 8 00:45:45.843000 audit: BPF prog-id=24 op=LOAD May 8 00:45:45.843000 audit: BPF prog-id=25 op=LOAD May 8 00:45:45.843000 audit: BPF prog-id=26 op=LOAD May 8 00:45:45.844492 systemd[1]: Starting systemd-userdbd.service... May 8 00:45:45.873449 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 8 00:45:45.875211 systemd[1]: Started systemd-userdbd.service. May 8 00:45:45.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:45.902233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 8 00:45:45.918449 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:45:45.924414 kernel: ACPI: button: Power Button [PWRF] May 8 00:45:45.929254 systemd-networkd[1039]: lo: Link UP May 8 00:45:45.929268 systemd-networkd[1039]: lo: Gained carrier May 8 00:45:45.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:45.930521 systemd-networkd[1039]: Enumeration completed May 8 00:45:45.930636 systemd[1]: Started systemd-networkd.service. May 8 00:45:45.933159 systemd-networkd[1039]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:45:45.934260 systemd-networkd[1039]: eth0: Link UP May 8 00:45:45.934267 systemd-networkd[1039]: eth0: Gained carrier May 8 00:45:45.949649 systemd-networkd[1039]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:45:45.940000 audit[1044]: AVC avc: denied { confidentiality } for pid=1044 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 8 00:45:45.940000 audit[1044]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557575c454b0 a1=338ac a2=7f378f464bc5 a3=5 items=110 ppid=1033 pid=1044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:45:45.940000 audit: CWD cwd="/" May 8 00:45:45.940000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=1 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=2 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=3 name=(null) inode=11099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=4 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=5 name=(null) inode=11100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=6 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=7 name=(null) inode=11101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=8 name=(null) inode=11101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=9 name=(null) inode=11102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=10 name=(null) inode=11101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=11 name=(null) inode=11103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=12 name=(null) inode=11101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=13 name=(null) inode=11104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=14 name=(null) inode=11101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=15 name=(null) inode=11105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=16 name=(null) inode=11101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=17 name=(null) inode=11106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=18 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=19 name=(null) inode=11107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 
00:45:45.940000 audit: PATH item=20 name=(null) inode=11107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=21 name=(null) inode=11108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=22 name=(null) inode=11107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=23 name=(null) inode=11109 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=24 name=(null) inode=11107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=25 name=(null) inode=11110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=26 name=(null) inode=11107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=27 name=(null) inode=11111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=28 name=(null) inode=11107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=29 name=(null) 
inode=11112 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=30 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=31 name=(null) inode=11113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=32 name=(null) inode=11113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=33 name=(null) inode=11114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=34 name=(null) inode=11113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=35 name=(null) inode=11115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=36 name=(null) inode=11113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=37 name=(null) inode=11116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=38 name=(null) inode=11113 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=39 name=(null) inode=11117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=40 name=(null) inode=11113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=41 name=(null) inode=11118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=42 name=(null) inode=11098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=43 name=(null) inode=11119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=44 name=(null) inode=11119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=45 name=(null) inode=11120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=46 name=(null) inode=11119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=47 name=(null) inode=11121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=48 name=(null) inode=11119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=49 name=(null) inode=11122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=50 name=(null) inode=11119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=51 name=(null) inode=11123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=52 name=(null) inode=11119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=53 name=(null) inode=11124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=55 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=56 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=57 name=(null) inode=11126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=58 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=59 name=(null) inode=11127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=60 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=61 name=(null) inode=11128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=62 name=(null) inode=11128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=63 name=(null) inode=11129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=64 name=(null) inode=11128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=65 name=(null) inode=11130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH 
item=66 name=(null) inode=11128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=67 name=(null) inode=11131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=68 name=(null) inode=11128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=69 name=(null) inode=11132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=70 name=(null) inode=11128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=71 name=(null) inode=11133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=72 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=73 name=(null) inode=11134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=74 name=(null) inode=11134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=75 name=(null) inode=11135 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=76 name=(null) inode=11134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=77 name=(null) inode=11136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=78 name=(null) inode=11134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=79 name=(null) inode=11137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=80 name=(null) inode=11134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=81 name=(null) inode=11138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=82 name=(null) inode=11134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=83 name=(null) inode=11139 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=84 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=85 name=(null) inode=11140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=86 name=(null) inode=11140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=87 name=(null) inode=11141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=88 name=(null) inode=11140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=89 name=(null) inode=11142 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=90 name=(null) inode=11140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=91 name=(null) inode=11143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=92 name=(null) inode=11140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=93 name=(null) inode=11144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=94 name=(null) inode=11140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=95 name=(null) inode=11145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=96 name=(null) inode=11125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=97 name=(null) inode=11146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=98 name=(null) inode=11146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=99 name=(null) inode=11147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=100 name=(null) inode=11146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=101 name=(null) inode=11148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=102 name=(null) inode=11146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=103 name=(null) inode=11149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=104 name=(null) inode=11146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=105 name=(null) inode=11150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=106 name=(null) inode=11146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=107 name=(null) inode=11151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PATH item=109 name=(null) inode=11152 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:45:45.940000 audit: PROCTITLE proctitle="(udev-worker)" May 8 00:45:45.974420 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:45:45.983792 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:45:45.984035 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 00:45:45.984186 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:45:45.994500 
kernel: mousedev: PS/2 mouse device common for all mice May 8 00:45:46.122476 kernel: kvm: Nested Virtualization enabled May 8 00:45:46.122627 kernel: SVM: kvm: Nested Paging enabled May 8 00:45:46.122643 kernel: SVM: Virtual VMLOAD VMSAVE supported May 8 00:45:46.123801 kernel: SVM: Virtual GIF supported May 8 00:45:46.142449 kernel: EDAC MC: Ver: 3.0.0 May 8 00:45:46.170138 systemd[1]: Finished systemd-udev-settle.service. May 8 00:45:46.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.173548 systemd[1]: Starting lvm2-activation-early.service... May 8 00:45:46.185336 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:45:46.214688 systemd[1]: Finished lvm2-activation-early.service. May 8 00:45:46.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.216175 systemd[1]: Reached target cryptsetup.target. May 8 00:45:46.218716 systemd[1]: Starting lvm2-activation.service... May 8 00:45:46.222765 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:45:46.251711 systemd[1]: Finished lvm2-activation.service. May 8 00:45:46.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.253057 systemd[1]: Reached target local-fs-pre.target. May 8 00:45:46.254048 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
May 8 00:45:46.254076 systemd[1]: Reached target local-fs.target. May 8 00:45:46.255001 systemd[1]: Reached target machines.target. May 8 00:45:46.257341 systemd[1]: Starting ldconfig.service... May 8 00:45:46.258720 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:45:46.258784 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:45:46.260421 systemd[1]: Starting systemd-boot-update.service... May 8 00:45:46.262747 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:45:46.265636 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:45:46.268059 systemd[1]: Starting systemd-sysext.service... May 8 00:45:46.269612 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) May 8 00:45:46.271038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:45:46.272877 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 00:45:46.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.333247 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:45:46.337361 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:45:46.337626 systemd[1]: Unmounted usr-share-oem.mount. 
May 8 00:45:46.360442 kernel: loop0: detected capacity change from 0 to 218376 May 8 00:45:46.365597 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) May 8 00:45:46.365597 systemd-fsck[1078]: /dev/vda1: 790 files, 120710/258078 clusters May 8 00:45:46.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.367237 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:45:46.370482 systemd[1]: Mounting boot.mount... May 8 00:45:46.374419 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:45:46.378493 systemd[1]: Mounted boot.mount. May 8 00:45:46.390961 systemd[1]: Finished systemd-boot-update.service. May 8 00:45:46.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.393401 kernel: loop1: detected capacity change from 0 to 218376 May 8 00:45:46.436449 (sd-sysext)[1084]: Using extensions 'kubernetes'. May 8 00:45:46.436990 (sd-sysext)[1084]: Merged extensions into '/usr'. May 8 00:45:46.501444 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:46.503150 systemd[1]: Mounting usr-share-oem.mount... May 8 00:45:46.504161 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:45:46.505950 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:45:46.508134 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:45:46.510791 systemd[1]: Starting modprobe@loop.service... May 8 00:45:46.511672 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 8 00:45:46.511897 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:45:46.512019 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:46.515105 systemd[1]: Mounted usr-share-oem.mount. May 8 00:45:46.516234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:46.516348 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:45:46.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.517579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:46.517809 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:45:46.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.519625 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:46.519788 systemd[1]: Finished modprobe@loop.service. 
May 8 00:45:46.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.521292 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:46.521421 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:45:46.522871 systemd[1]: Finished systemd-sysext.service. May 8 00:45:46.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.525176 systemd[1]: Starting ensure-sysext.service... May 8 00:45:46.527613 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:45:46.536378 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:45:46.554140 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:45:46.555607 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:45:46.557311 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:45:46.602691 systemd[1]: Reloading. 
May 8 00:45:46.663839 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-05-08T00:45:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:45:46.663870 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-05-08T00:45:46Z" level=info msg="torcx already run" May 8 00:45:46.764214 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:45:46.764234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:45:46.782232 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 8 00:45:46.837000 audit: BPF prog-id=27 op=LOAD May 8 00:45:46.837000 audit: BPF prog-id=23 op=UNLOAD May 8 00:45:46.838000 audit: BPF prog-id=28 op=LOAD May 8 00:45:46.838000 audit: BPF prog-id=18 op=UNLOAD May 8 00:45:46.838000 audit: BPF prog-id=29 op=LOAD May 8 00:45:46.838000 audit: BPF prog-id=30 op=LOAD May 8 00:45:46.838000 audit: BPF prog-id=19 op=UNLOAD May 8 00:45:46.838000 audit: BPF prog-id=20 op=UNLOAD May 8 00:45:46.839000 audit: BPF prog-id=31 op=LOAD May 8 00:45:46.839000 audit: BPF prog-id=24 op=UNLOAD May 8 00:45:46.839000 audit: BPF prog-id=32 op=LOAD May 8 00:45:46.839000 audit: BPF prog-id=33 op=LOAD May 8 00:45:46.839000 audit: BPF prog-id=25 op=UNLOAD May 8 00:45:46.839000 audit: BPF prog-id=26 op=UNLOAD May 8 00:45:46.840000 audit: BPF prog-id=34 op=LOAD May 8 00:45:46.840000 audit: BPF prog-id=35 op=LOAD May 8 00:45:46.840000 audit: BPF prog-id=21 op=UNLOAD May 8 00:45:46.840000 audit: BPF prog-id=22 op=UNLOAD May 8 00:45:46.844030 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:45:46.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:46.848229 systemd[1]: Starting audit-rules.service... May 8 00:45:46.850103 systemd[1]: Starting clean-ca-certificates.service... May 8 00:45:46.852161 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:45:46.853000 audit: BPF prog-id=36 op=LOAD May 8 00:45:46.854542 systemd[1]: Starting systemd-resolved.service... May 8 00:45:46.855000 audit: BPF prog-id=37 op=LOAD May 8 00:45:46.856670 systemd[1]: Starting systemd-timesyncd.service... May 8 00:45:46.858447 systemd[1]: Starting systemd-update-utmp.service... May 8 00:45:47.304356 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 8 00:45:47.304602 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:45:47.306153 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:45:47.309000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:45:47.308126 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:45:47.310599 systemd[1]: Starting modprobe@loop.service... May 8 00:45:47.311740 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:45:47.311896 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:45:47.312075 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:47.313812 systemd[1]: Finished clean-ca-certificates.service. May 8 00:45:47.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.315896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:47.316185 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:45:47.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:47.318207 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:47.318530 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:45:47.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.320677 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:47.320942 systemd[1]: Finished modprobe@loop.service. May 8 00:45:47.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.325240 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:47.325458 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:45:47.325636 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:45:47.329025 systemd[1]: Finished systemd-update-utmp.service. 
May 8 00:45:47.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.331596 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:47.332007 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:45:47.334007 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:45:47.357112 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:45:47.359702 systemd[1]: Starting modprobe@loop.service... May 8 00:45:47.360790 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:45:47.360928 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:45:47.361061 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:45:47.361162 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:47.362366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:45:47.362537 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:45:47.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:45:47.364175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:45:47.364341 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:45:47.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.365777 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:45:47.365876 systemd[1]: Finished modprobe@loop.service. May 8 00:45:47.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.367174 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:45:47.367274 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:45:47.369648 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:45:47.369855 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:45:47.371098 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:45:47.406308 systemd[1]: Starting modprobe@drm.service... 
May 8 00:45:47.408358 systemd[1]: Starting modprobe@efi_pstore.service...
May 8 00:45:47.410591 systemd[1]: Starting modprobe@loop.service...
May 8 00:45:47.411537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 8 00:45:47.411668 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 8 00:45:47.412987 systemd[1]: Starting systemd-networkd-wait-online.service...
May 8 00:45:47.414184 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 00:45:47.414324 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 00:45:47.415630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 00:45:47.415768 systemd[1]: Finished modprobe@dm_mod.service.
May 8 00:45:47.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.417199 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 00:45:47.417311 systemd[1]: Finished modprobe@drm.service.
May 8 00:45:47.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.418674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 00:45:47.418787 systemd[1]: Finished modprobe@efi_pstore.service.
May 8 00:45:47.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.420239 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 00:45:47.420355 systemd[1]: Finished modprobe@loop.service.
May 8 00:45:47.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:45:47.421900 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 00:45:47.422035 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 8 00:45:47.423768 systemd[1]: Finished ensure-sysext.service.
May 8 00:45:47.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:47.476554 systemd[1]: Started systemd-timesyncd.service. May 8 00:45:47.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:48.502626 systemd[1]: Finished systemd-journal-catalog-update.service. May 8 00:45:48.503447 systemd-resolved[1156]: Positive Trust Anchors: May 8 00:45:48.503745 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:45:48.503850 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:45:48.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:48.504091 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:45:48.504151 systemd-timesyncd[1157]: Initial clock synchronization to Thu 2025-05-08 00:45:48.502460 UTC. May 8 00:45:48.504238 systemd[1]: Reached target time-set.target. May 8 00:45:48.514618 systemd-resolved[1156]: Defaulting to hostname 'linux'. 
May 8 00:45:48.516053 systemd[1]: Started systemd-resolved.service. May 8 00:45:48.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:48.538173 systemd[1]: Finished ldconfig.service. May 8 00:45:48.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:48.539095 systemd[1]: Reached target network.target. May 8 00:45:48.539971 systemd[1]: Reached target nss-lookup.target. May 8 00:45:48.542269 systemd[1]: Starting systemd-update-done.service... May 8 00:45:48.558738 systemd-networkd[1039]: eth0: Gained IPv6LL May 8 00:45:48.560861 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 00:45:48.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:45:48.579740 systemd[1]: Reached target network-online.target. May 8 00:45:48.600000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:45:48.600000 audit[1186]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc2e945180 a2=420 a3=0 items=0 ppid=1153 pid=1186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:45:48.600000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:45:48.601414 augenrules[1186]: No rules May 8 00:45:48.602103 systemd[1]: Finished audit-rules.service. 
May 8 00:45:48.620725 systemd[1]: Finished systemd-update-done.service. May 8 00:45:48.621993 systemd[1]: Reached target sysinit.target. May 8 00:45:48.623079 systemd[1]: Started motdgen.path. May 8 00:45:48.624355 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:45:48.626057 systemd[1]: Started logrotate.timer. May 8 00:45:48.627201 systemd[1]: Started mdadm.timer. May 8 00:45:48.631338 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:45:48.632621 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:45:48.632658 systemd[1]: Reached target paths.target. May 8 00:45:48.633734 systemd[1]: Reached target timers.target. May 8 00:45:48.635120 systemd[1]: Listening on dbus.socket. May 8 00:45:48.637321 systemd[1]: Starting docker.socket... May 8 00:45:48.669613 systemd[1]: Listening on sshd.socket. May 8 00:45:48.670753 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:45:48.672933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:45:48.673483 systemd[1]: Finished systemd-machine-id-commit.service. May 8 00:45:48.679083 systemd[1]: Listening on docker.socket. May 8 00:45:48.680203 systemd[1]: Reached target sockets.target. May 8 00:45:48.681254 systemd[1]: Reached target basic.target. May 8 00:45:48.682328 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:45:48.682359 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:45:48.683687 systemd[1]: Starting containerd.service... May 8 00:45:48.686326 systemd[1]: Starting dbus.service... May 8 00:45:48.688457 systemd[1]: Starting enable-oem-cloudinit.service... 
May 8 00:45:48.721832 systemd[1]: Starting extend-filesystems.service... May 8 00:45:48.723046 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:45:48.724055 jq[1195]: false May 8 00:45:48.724699 systemd[1]: Starting kubelet.service... May 8 00:45:48.726543 systemd[1]: Starting motdgen.service... May 8 00:45:48.728551 systemd[1]: Starting prepare-helm.service... May 8 00:45:48.731436 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:45:48.733633 systemd[1]: Starting sshd-keygen.service... May 8 00:45:48.736854 systemd[1]: Starting systemd-logind.service... May 8 00:45:48.737884 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:45:48.737958 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:45:48.738412 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:45:48.739131 systemd[1]: Starting update-engine.service... May 8 00:45:48.743047 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:45:48.746087 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:45:48.746618 jq[1211]: true May 8 00:45:48.746297 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:45:48.749139 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:45:48.749359 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:45:48.751679 systemd[1]: Started dbus.service. 
May 8 00:45:48.751212 dbus-daemon[1194]: [system] SELinux support is enabled May 8 00:45:48.759652 jq[1219]: true May 8 00:45:48.758761 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:45:48.758808 systemd[1]: Reached target system-config.target. May 8 00:45:48.760083 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:45:48.760103 systemd[1]: Reached target user-config.target. May 8 00:45:48.761474 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:45:48.761627 systemd[1]: Finished motdgen.service. May 8 00:45:48.880895 extend-filesystems[1196]: Found loop1 May 8 00:45:48.891349 extend-filesystems[1196]: Found sr0 May 8 00:45:48.891349 extend-filesystems[1196]: Found vda May 8 00:45:48.891349 extend-filesystems[1196]: Found vda1 May 8 00:45:48.891349 extend-filesystems[1196]: Found vda2 May 8 00:45:48.891349 extend-filesystems[1196]: Found vda3 May 8 00:45:48.891349 extend-filesystems[1196]: Found usr May 8 00:45:48.891349 extend-filesystems[1196]: Found vda4 May 8 00:45:48.891349 extend-filesystems[1196]: Found vda6 May 8 00:45:48.891349 extend-filesystems[1196]: Found vda7 May 8 00:45:48.891349 extend-filesystems[1196]: Found vda9 May 8 00:45:48.891349 extend-filesystems[1196]: Checking size of /dev/vda9 May 8 00:45:48.902201 update_engine[1206]: I0508 00:45:48.901455 1206 main.cc:92] Flatcar Update Engine starting May 8 00:45:48.903876 systemd[1]: Started update-engine.service. May 8 00:45:48.906795 update_engine[1206]: I0508 00:45:48.904140 1206 update_check_scheduler.cc:74] Next update check in 3m45s May 8 00:45:48.930808 systemd[1]: Started locksmithd.service. 
May 8 00:45:48.933324 extend-filesystems[1196]: Resized partition /dev/vda9
May 8 00:45:48.935023 extend-filesystems[1245]: resize2fs 1.46.5 (30-Dec-2021)
May 8 00:45:48.972411 env[1220]: time="2025-05-08T00:45:48.972334212Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 8 00:45:48.993513 env[1220]: time="2025-05-08T00:45:48.993423770Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 00:45:48.993702 env[1220]: time="2025-05-08T00:45:48.993633924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:48.995694 env[1220]: time="2025-05-08T00:45:48.995591686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:48.995694 env[1220]: time="2025-05-08T00:45:48.995683208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:48.996002 env[1220]: time="2025-05-08T00:45:48.995968813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:48.996002 env[1220]: time="2025-05-08T00:45:48.995992718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 00:45:48.996056 env[1220]: time="2025-05-08T00:45:48.996006354Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 8 00:45:48.996056 env[1220]: time="2025-05-08T00:45:48.996015301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 00:45:48.996097 env[1220]: time="2025-05-08T00:45:48.996084771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:48.996450 env[1220]: time="2025-05-08T00:45:48.996418717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 00:45:48.996594 env[1220]: time="2025-05-08T00:45:48.996548050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 00:45:48.996594 env[1220]: time="2025-05-08T00:45:48.996566184Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 00:45:48.996654 env[1220]: time="2025-05-08T00:45:48.996627990Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 8 00:45:48.996654 env[1220]: time="2025-05-08T00:45:48.996639321Z" level=info msg="metadata content store policy set" policy=shared
May 8 00:45:49.029102 tar[1215]: linux-amd64/LICENSE
May 8 00:45:49.029102 tar[1215]: linux-amd64/helm
May 8 00:45:49.154623 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 8 00:45:49.377827 systemd-logind[1205]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 00:45:49.378355 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 00:45:49.377854 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 00:45:49.379513 systemd-logind[1205]: New seat seat0.
May 8 00:45:49.385711 systemd[1]: Started systemd-logind.service.
May 8 00:45:49.472064 systemd[1]: Finished sshd-keygen.service.
May 8 00:45:49.476501 systemd[1]: Starting issuegen.service...
May 8 00:45:49.484635 systemd[1]: issuegen.service: Deactivated successfully.
May 8 00:45:49.484823 systemd[1]: Finished issuegen.service.
May 8 00:45:49.488319 systemd[1]: Starting systemd-user-sessions.service...
May 8 00:45:49.501248 systemd[1]: Finished systemd-user-sessions.service.
May 8 00:45:49.505568 systemd[1]: Started getty@tty1.service.
May 8 00:45:49.508553 systemd[1]: Started serial-getty@ttyS0.service.
May 8 00:45:49.509877 systemd[1]: Reached target getty.target.
May 8 00:45:49.873281 tar[1215]: linux-amd64/README.md
May 8 00:45:49.878252 systemd[1]: Finished prepare-helm.service.
May 8 00:45:49.898761 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 8 00:45:50.084990 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 00:45:51.508924 extend-filesystems[1245]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 00:45:51.508924 extend-filesystems[1245]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 00:45:51.508924 extend-filesystems[1245]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 8 00:45:51.533085 extend-filesystems[1196]: Resized filesystem in /dev/vda9
May 8 00:45:51.509442 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 00:45:51.509619 systemd[1]: Finished extend-filesystems.service.
May 8 00:45:51.710638 env[1220]: time="2025-05-08T00:45:51.710505859Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 00:45:51.710638 env[1220]: time="2025-05-08T00:45:51.710652805Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710680216Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710768742Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710791144Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710811112Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710832592Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710849393Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710875713Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710894728Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710911600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.710927340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 00:45:51.711236 env[1220]: time="2025-05-08T00:45:51.711180084Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 00:45:51.711780 env[1220]: time="2025-05-08T00:45:51.711737569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 00:45:51.712217 env[1220]: time="2025-05-08T00:45:51.712183646Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 00:45:51.712275 env[1220]: time="2025-05-08T00:45:51.712242296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712275 env[1220]: time="2025-05-08T00:45:51.712260110Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 00:45:51.712389 env[1220]: time="2025-05-08T00:45:51.712366018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712423 env[1220]: time="2025-05-08T00:45:51.712398078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712457 env[1220]: time="2025-05-08T00:45:51.712421883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712457 env[1220]: time="2025-05-08T00:45:51.712438404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712515 env[1220]: time="2025-05-08T00:45:51.712456939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712515 env[1220]: time="2025-05-08T00:45:51.712476806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712515 env[1220]: time="2025-05-08T00:45:51.712493157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712515 env[1220]: time="2025-05-08T00:45:51.712507373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712666 env[1220]: time="2025-05-08T00:45:51.712524886Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 00:45:51.712726 env[1220]: time="2025-05-08T00:45:51.712702660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712771 env[1220]: time="2025-05-08T00:45:51.712728458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712771 env[1220]: time="2025-05-08T00:45:51.712748766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 00:45:51.712771 env[1220]: time="2025-05-08T00:45:51.712762983Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 00:45:51.712855 env[1220]: time="2025-05-08T00:45:51.712788170Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 8 00:45:51.712855 env[1220]: time="2025-05-08T00:45:51.712805022Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 00:45:51.712855 env[1220]: time="2025-05-08T00:45:51.712844085Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 8 00:45:51.712943 env[1220]: time="2025-05-08T00:45:51.712909197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 8 00:45:51.713342 env[1220]: time="2025-05-08T00:45:51.713262910Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 8 00:45:51.714449 env[1220]: time="2025-05-08T00:45:51.713360894Z" level=info msg="Connect containerd service"
May 8 00:45:51.714449 env[1220]: time="2025-05-08T00:45:51.713428030Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 8 00:45:51.714449 env[1220]: time="2025-05-08T00:45:51.714308652Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 00:45:51.714546 env[1220]: time="2025-05-08T00:45:51.714478531Z" level=info msg="Start subscribing containerd event"
May 8 00:45:51.714603 env[1220]: time="2025-05-08T00:45:51.714584650Z" level=info msg="Start recovering state"
May 8 00:45:51.714710 env[1220]: time="2025-05-08T00:45:51.714685409Z" level=info msg="Start event monitor"
May 8 00:45:51.714802 env[1220]: time="2025-05-08T00:45:51.714731685Z" level=info msg="Start snapshots syncer"
May 8 00:45:51.714802 env[1220]: time="2025-05-08T00:45:51.714754679Z" level=info msg="Start cni network conf syncer for default"
May 8 00:45:51.714802 env[1220]: time="2025-05-08T00:45:51.714786348Z" level=info msg="Start streaming server"
May 8 00:45:51.715220 env[1220]: time="2025-05-08T00:45:51.715197830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 8 00:45:51.715290 env[1220]: time="2025-05-08T00:45:51.715263403Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 8 00:45:51.715363 env[1220]: time="2025-05-08T00:45:51.715327373Z" level=info msg="containerd successfully booted in 2.746821s"
May 8 00:45:51.715477 systemd[1]: Started containerd.service.
May 8 00:45:51.717335 bash[1246]: Updated "/home/core/.ssh/authorized_keys"
May 8 00:45:51.717969 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 8 00:45:52.302920 systemd[1]: Started kubelet.service.
May 8 00:45:52.310758 systemd[1]: Reached target multi-user.target.
May 8 00:45:52.313451 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 8 00:45:52.321237 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 8 00:45:52.321410 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 8 00:45:52.322684 systemd[1]: Startup finished in 1.002s (kernel) + 7.045s (initrd) + 11.411s (userspace) = 19.460s.
May 8 00:45:52.750353 kubelet[1276]: E0508 00:45:52.750210 1276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:45:52.751932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:45:52.752048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:45:52.752274 systemd[1]: kubelet.service: Consumed 1.214s CPU time.
May 8 00:45:58.333456 systemd[1]: Created slice system-sshd.slice.
May 8 00:45:58.334757 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:47926.service.
May 8 00:45:58.374174 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 47926 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:45:58.376012 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:58.386996 systemd-logind[1205]: New session 1 of user core.
May 8 00:45:58.388380 systemd[1]: Created slice user-500.slice.
May 8 00:45:58.389965 systemd[1]: Starting user-runtime-dir@500.service...
May 8 00:45:58.399681 systemd[1]: Finished user-runtime-dir@500.service.
May 8 00:45:58.401527 systemd[1]: Starting user@500.service...
May 8 00:45:58.404599 (systemd)[1288]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:58.496674 systemd[1288]: Queued start job for default target default.target.
May 8 00:45:58.497220 systemd[1288]: Reached target paths.target.
May 8 00:45:58.497243 systemd[1288]: Reached target sockets.target.
May 8 00:45:58.497259 systemd[1288]: Reached target timers.target.
May 8 00:45:58.497274 systemd[1288]: Reached target basic.target.
May 8 00:45:58.497318 systemd[1288]: Reached target default.target.
May 8 00:45:58.497344 systemd[1288]: Startup finished in 87ms.
May 8 00:45:58.497526 systemd[1]: Started user@500.service.
May 8 00:45:58.498740 systemd[1]: Started session-1.scope.
May 8 00:45:58.552361 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:47936.service.
May 8 00:45:58.587374 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 47936 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:45:58.588560 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:58.594711 systemd[1]: Started session-2.scope.
May 8 00:45:58.595455 systemd-logind[1205]: New session 2 of user core.
May 8 00:45:58.650396 sshd[1297]: pam_unix(sshd:session): session closed for user core
May 8 00:45:58.653486 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:47938.service.
May 8 00:45:58.654031 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:47936.service: Deactivated successfully.
May 8 00:45:58.654676 systemd[1]: session-2.scope: Deactivated successfully.
May 8 00:45:58.655151 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit.
May 8 00:45:58.656075 systemd-logind[1205]: Removed session 2.
May 8 00:45:58.688017 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 47938 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:45:58.689364 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:58.694410 systemd-logind[1205]: New session 3 of user core.
May 8 00:45:58.695511 systemd[1]: Started session-3.scope.
May 8 00:45:58.745598 sshd[1302]: pam_unix(sshd:session): session closed for user core
May 8 00:45:58.748845 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:47938.service: Deactivated successfully.
May 8 00:45:58.749523 systemd[1]: session-3.scope: Deactivated successfully.
May 8 00:45:58.750118 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit.
May 8 00:45:58.751508 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:47954.service.
May 8 00:45:58.752380 systemd-logind[1205]: Removed session 3.
May 8 00:45:58.783381 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 47954 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:45:58.784751 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:58.788807 systemd-logind[1205]: New session 4 of user core.
May 8 00:45:58.789835 systemd[1]: Started session-4.scope.
May 8 00:45:58.846606 sshd[1310]: pam_unix(sshd:session): session closed for user core
May 8 00:45:58.849448 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:47954.service: Deactivated successfully.
May 8 00:45:58.850054 systemd[1]: session-4.scope: Deactivated successfully.
May 8 00:45:58.850752 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit.
May 8 00:45:58.852056 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:47962.service.
May 8 00:45:58.852853 systemd-logind[1205]: Removed session 4.
May 8 00:45:58.884340 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 47962 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:45:58.885696 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:45:58.888985 systemd-logind[1205]: New session 5 of user core.
May 8 00:45:58.889938 systemd[1]: Started session-5.scope.
May 8 00:45:58.949304 sudo[1319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 8 00:45:58.949608 sudo[1319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 8 00:45:58.987444 systemd[1]: Starting docker.service...
May 8 00:45:59.101334 env[1330]: time="2025-05-08T00:45:59.101174932Z" level=info msg="Starting up"
May 8 00:45:59.103730 env[1330]: time="2025-05-08T00:45:59.103635808Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 8 00:45:59.103730 env[1330]: time="2025-05-08T00:45:59.103674881Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 8 00:45:59.103730 env[1330]: time="2025-05-08T00:45:59.103706991Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 8 00:45:59.103730 env[1330]: time="2025-05-08T00:45:59.103724474Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:45:59.107042 env[1330]: time="2025-05-08T00:45:59.106981694Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 8 00:45:59.107042 env[1330]: time="2025-05-08T00:45:59.107012662Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 8 00:45:59.107042 env[1330]: time="2025-05-08T00:45:59.107035274Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 8 00:45:59.107042 env[1330]: time="2025-05-08T00:45:59.107047287Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 8 00:45:59.145935 env[1330]: time="2025-05-08T00:45:59.145855480Z" level=info msg="Loading containers: start."
May 8 00:45:59.346619 kernel: Initializing XFRM netlink socket
May 8 00:45:59.380041 env[1330]: time="2025-05-08T00:45:59.379885453Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 8 00:45:59.449631 systemd-networkd[1039]: docker0: Link UP
May 8 00:45:59.468234 env[1330]: time="2025-05-08T00:45:59.468172474Z" level=info msg="Loading containers: done."
May 8 00:45:59.484173 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2139993210-merged.mount: Deactivated successfully.
May 8 00:45:59.512318 env[1330]: time="2025-05-08T00:45:59.512246765Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 8 00:45:59.512519 env[1330]: time="2025-05-08T00:45:59.512499188Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 8 00:45:59.512678 env[1330]: time="2025-05-08T00:45:59.512650913Z" level=info msg="Daemon has completed initialization"
May 8 00:45:59.537959 systemd[1]: Started docker.service.
May 8 00:45:59.570455 env[1330]: time="2025-05-08T00:45:59.570349249Z" level=info msg="API listen on /run/docker.sock"
May 8 00:46:00.666771 env[1220]: time="2025-05-08T00:46:00.666647892Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 8 00:46:01.832305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12888251.mount: Deactivated successfully.
May 8 00:46:02.820147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 8 00:46:02.820355 systemd[1]: Stopped kubelet.service.
May 8 00:46:02.820402 systemd[1]: kubelet.service: Consumed 1.214s CPU time.
May 8 00:46:02.823685 systemd[1]: Starting kubelet.service...
May 8 00:46:02.928613 systemd[1]: Started kubelet.service.
May 8 00:46:03.037380 kubelet[1466]: E0508 00:46:03.037303 1466 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:46:03.040709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:46:03.040830 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:46:06.902298 env[1220]: time="2025-05-08T00:46:06.902197691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:06.907562 env[1220]: time="2025-05-08T00:46:06.907486150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:06.912023 env[1220]: time="2025-05-08T00:46:06.911880162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:06.915690 env[1220]: time="2025-05-08T00:46:06.915598226Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:06.916475 env[1220]: time="2025-05-08T00:46:06.916376576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 8 00:46:06.917388 env[1220]: time="2025-05-08T00:46:06.917335765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 8 00:46:10.386654 env[1220]: time="2025-05-08T00:46:10.386584635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:10.389902 env[1220]: time="2025-05-08T00:46:10.389866591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:10.393203 env[1220]: time="2025-05-08T00:46:10.393148036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:10.396287 env[1220]: time="2025-05-08T00:46:10.396240296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:10.397121 env[1220]: time="2025-05-08T00:46:10.397080091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 8 00:46:10.397895 env[1220]: time="2025-05-08T00:46:10.397834135Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 8 00:46:13.007750 env[1220]: time="2025-05-08T00:46:13.007676113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:13.010746 env[1220]: time="2025-05-08T00:46:13.010646875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:13.014037 env[1220]: time="2025-05-08T00:46:13.013921568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:13.017090 env[1220]: time="2025-05-08T00:46:13.016887481Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:13.017895 env[1220]: time="2025-05-08T00:46:13.017845057Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 8 00:46:13.018708 env[1220]: time="2025-05-08T00:46:13.018667329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 8 00:46:13.070124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 00:46:13.070345 systemd[1]: Stopped kubelet.service.
May 8 00:46:13.072131 systemd[1]: Starting kubelet.service...
May 8 00:46:13.160328 systemd[1]: Started kubelet.service.
May 8 00:46:14.011825 kubelet[1477]: E0508 00:46:14.011723 1477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:46:14.013783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:46:14.013908 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:46:15.017455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565495197.mount: Deactivated successfully.
May 8 00:46:16.525948 env[1220]: time="2025-05-08T00:46:16.525876091Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:16.620980 env[1220]: time="2025-05-08T00:46:16.620782197Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:16.762253 env[1220]: time="2025-05-08T00:46:16.762170842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:16.773450 env[1220]: time="2025-05-08T00:46:16.773367263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:16.773944 env[1220]: time="2025-05-08T00:46:16.773880586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 8 00:46:16.774534 env[1220]: time="2025-05-08T00:46:16.774476895Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 8 00:46:17.489710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201139108.mount: Deactivated successfully.
May 8 00:46:18.860036 env[1220]: time="2025-05-08T00:46:18.859951188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:18.862051 env[1220]: time="2025-05-08T00:46:18.861992587Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:18.864053 env[1220]: time="2025-05-08T00:46:18.863985465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:18.865985 env[1220]: time="2025-05-08T00:46:18.865916868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:18.866619 env[1220]: time="2025-05-08T00:46:18.866584651Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 8 00:46:18.867221 env[1220]: time="2025-05-08T00:46:18.867160481Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 00:46:19.598088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823001912.mount: Deactivated successfully.
May 8 00:46:19.604445 env[1220]: time="2025-05-08T00:46:19.604381707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:19.606288 env[1220]: time="2025-05-08T00:46:19.606225816Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:19.607808 env[1220]: time="2025-05-08T00:46:19.607766205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:19.609287 env[1220]: time="2025-05-08T00:46:19.609222818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:19.609727 env[1220]: time="2025-05-08T00:46:19.609688862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 8 00:46:19.610204 env[1220]: time="2025-05-08T00:46:19.610168331Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 8 00:46:20.148887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78647985.mount: Deactivated successfully.
May 8 00:46:24.070090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 8 00:46:24.070282 systemd[1]: Stopped kubelet.service.
May 8 00:46:24.072082 systemd[1]: Starting kubelet.service...
May 8 00:46:24.151283 env[1220]: time="2025-05-08T00:46:24.151205952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:24.161625 systemd[1]: Started kubelet.service.
May 8 00:46:24.222812 kubelet[1488]: E0508 00:46:24.222729 1488 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 00:46:24.224811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 00:46:24.224935 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 00:46:24.609112 env[1220]: time="2025-05-08T00:46:24.609040306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:24.614595 env[1220]: time="2025-05-08T00:46:24.614518011Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:24.616988 env[1220]: time="2025-05-08T00:46:24.616916319Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 00:46:24.617863 env[1220]: time="2025-05-08T00:46:24.617810007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 8 00:46:27.041399 systemd[1]: Stopped kubelet.service.
May 8 00:46:27.043757 systemd[1]: Starting kubelet.service...
May 8 00:46:27.070256 systemd[1]: Reloading.
May 8 00:46:27.158287 /usr/lib/systemd/system-generators/torcx-generator[1543]: time="2025-05-08T00:46:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 8 00:46:27.158688 /usr/lib/systemd/system-generators/torcx-generator[1543]: time="2025-05-08T00:46:27Z" level=info msg="torcx already run"
May 8 00:46:27.890369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 8 00:46:27.890393 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 8 00:46:27.911726 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 00:46:28.005881 systemd[1]: Started kubelet.service.
May 8 00:46:28.007280 systemd[1]: Stopping kubelet.service...
May 8 00:46:28.007500 systemd[1]: kubelet.service: Deactivated successfully.
May 8 00:46:28.007685 systemd[1]: Stopped kubelet.service.
May 8 00:46:28.009391 systemd[1]: Starting kubelet.service...
May 8 00:46:28.099880 systemd[1]: Started kubelet.service.
May 8 00:46:28.196168 kubelet[1591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:46:28.196168 kubelet[1591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 8 00:46:28.196168 kubelet[1591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 00:46:28.196168 kubelet[1591]: I0508 00:46:28.195624 1591 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 00:46:28.563213 kubelet[1591]: I0508 00:46:28.563087 1591 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 8 00:46:28.563213 kubelet[1591]: I0508 00:46:28.563121 1591 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 00:46:28.563425 kubelet[1591]: I0508 00:46:28.563405 1591 server.go:954] "Client rotation is on, will bootstrap in background"
May 8 00:46:28.624973 kubelet[1591]: I0508 00:46:28.624643 1591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 00:46:28.625297 kubelet[1591]: E0508 00:46:28.625245 1591 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
May 8 00:46:28.636084 kubelet[1591]: E0508 00:46:28.636030 1591 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 00:46:28.636084 kubelet[1591]: I0508 00:46:28.636063 1591 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 00:46:28.641276 kubelet[1591]: I0508 00:46:28.641226 1591 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 8 00:46:28.642957 kubelet[1591]: I0508 00:46:28.642897 1591 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 00:46:28.643162 kubelet[1591]: I0508 00:46:28.642951 1591 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 00:46:28.643302 kubelet[1591]: I0508 00:46:28.643163 1591 topology_manager.go:138] "Creating topology manager with none policy"
May 8 00:46:28.643302 kubelet[1591]: I0508 00:46:28.643175 1591 container_manager_linux.go:304] "Creating device plugin manager"
May 8 00:46:28.643373 kubelet[1591]: I0508 00:46:28.643325 1591 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:46:28.655000 kubelet[1591]: I0508 00:46:28.654952 1591 kubelet.go:446] "Attempting to sync node with API server"
May 8 00:46:28.655000 kubelet[1591]: I0508 00:46:28.654981 1591 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 00:46:28.655000 kubelet[1591]: I0508 00:46:28.655000 1591 kubelet.go:352] "Adding apiserver pod source"
May 8 00:46:28.655000 kubelet[1591]: I0508 00:46:28.655014 1591 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 00:46:28.688292 kubelet[1591]: I0508 00:46:28.688242 1591 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 8 00:46:28.688756 kubelet[1591]: I0508 00:46:28.688736 1591 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 00:46:28.688841 kubelet[1591]: W0508 00:46:28.688802 1591 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 00:46:28.713716 kubelet[1591]: W0508 00:46:28.713644 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
May 8 00:46:28.713716 kubelet[1591]: E0508 00:46:28.713705 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
May 8 00:46:28.714214 kubelet[1591]: I0508 00:46:28.714159 1591 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 8 00:46:28.714214 kubelet[1591]: I0508 00:46:28.714233 1591 server.go:1287] "Started kubelet"
May 8 00:46:28.718495 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 8 00:46:28.719964 kubelet[1591]: I0508 00:46:28.719929 1591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 00:46:28.720974 kubelet[1591]: W0508 00:46:28.720897 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
May 8 00:46:28.721122 kubelet[1591]: E0508 00:46:28.720979 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
May 8 00:46:28.722007 kubelet[1591]: I0508 00:46:28.719892 1591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 00:46:28.722103 kubelet[1591]: I0508 00:46:28.722077 1591 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 00:46:28.722191 kubelet[1591]: I0508 00:46:28.722160 1591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 00:46:28.722561 kubelet[1591]: I0508 00:46:28.722457 1591 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 8 00:46:28.722646 kubelet[1591]: I0508 00:46:28.722618 1591 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 00:46:28.722687 kubelet[1591]: I0508 00:46:28.722671 1591 reconciler.go:26] "Reconciler: start to sync state"
May 8 00:46:28.723191 kubelet[1591]: W0508 00:46:28.723145 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
May 8 00:46:28.723258 kubelet[1591]: E0508 00:46:28.723202 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
May 8 00:46:28.723792 kubelet[1591]: E0508 00:46:28.723758 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:28.724325 kubelet[1591]: E0508 00:46:28.724169 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms"
May 8 00:46:28.725012 kubelet[1591]: E0508 00:46:28.723258 1591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66bc22d99cc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:46:28.714192073 +0000 UTC m=+0.610871771,LastTimestamp:2025-05-08 00:46:28.714192073 +0000 UTC m=+0.610871771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 8 00:46:28.725330 kubelet[1591]: I0508 00:46:28.725294 1591 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 8 00:46:28.725638 kubelet[1591]: I0508 00:46:28.725612 1591 factory.go:221] Registration of the containerd container factory successfully
May 8 00:46:28.725785 kubelet[1591]: I0508 00:46:28.725748 1591 factory.go:221] Registration of the systemd container factory successfully
May 8 00:46:28.726278 kubelet[1591]: I0508 00:46:28.726241 1591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 00:46:28.726924 kubelet[1591]: I0508 00:46:28.726890 1591 server.go:490] "Adding debug handlers to kubelet server"
May 8 00:46:28.727690 kubelet[1591]: E0508 00:46:28.727660 1591 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 00:46:28.739282 kubelet[1591]: I0508 00:46:28.738362 1591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 00:46:28.739875 kubelet[1591]: I0508 00:46:28.739773 1591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 8 00:46:28.739875 kubelet[1591]: I0508 00:46:28.739813 1591 status_manager.go:227] "Starting to sync pod status with apiserver"
May 8 00:46:28.739875 kubelet[1591]: I0508 00:46:28.739841 1591 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 8 00:46:28.739875 kubelet[1591]: I0508 00:46:28.739851 1591 kubelet.go:2388] "Starting kubelet main sync loop"
May 8 00:46:28.740017 kubelet[1591]: E0508 00:46:28.739907 1591 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 00:46:28.743028 kubelet[1591]: W0508 00:46:28.742991 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused
May 8 00:46:28.743091 kubelet[1591]: E0508 00:46:28.743028 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError"
May 8 00:46:28.743633 kubelet[1591]: I0508 00:46:28.743440 1591 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 8 00:46:28.743633 kubelet[1591]: I0508 00:46:28.743453 1591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 8 00:46:28.743633 kubelet[1591]: I0508 00:46:28.743474 1591 state_mem.go:36] "Initialized new in-memory state store"
May 8 00:46:28.824565 kubelet[1591]: E0508 00:46:28.824507 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:28.840979 kubelet[1591]: E0508 00:46:28.840132 1591 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:46:28.925304 kubelet[1591]: E0508 00:46:28.925249 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:28.925882 kubelet[1591]: E0508 00:46:28.925828 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms"
May 8 00:46:29.026372 kubelet[1591]: E0508 00:46:29.026290 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.040508 kubelet[1591]: E0508 00:46:29.040446 1591 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:46:29.127640 kubelet[1591]: E0508 00:46:29.127379 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.228487 kubelet[1591]: E0508 00:46:29.228404 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.326346 kubelet[1591]: E0508 00:46:29.326293 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms"
May 8 00:46:29.329392 kubelet[1591]: E0508 00:46:29.329360 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.430459 kubelet[1591]: E0508 00:46:29.430297 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.441651 kubelet[1591]: E0508 00:46:29.441568 1591 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 8 00:46:29.531411 kubelet[1591]: E0508 00:46:29.531316 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.629333 kubelet[1591]: I0508 00:46:29.629273 1591 policy_none.go:49] "None policy: Start"
May 8 00:46:29.629333 kubelet[1591]: I0508 00:46:29.629308 1591 memory_manager.go:186] "Starting memorymanager" policy="None"
May 8 00:46:29.629333 kubelet[1591]: I0508 00:46:29.629321 1591 state_mem.go:35] "Initializing new in-memory state store"
May 8 00:46:29.632349 kubelet[1591]: E0508 00:46:29.632316 1591 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 8 00:46:29.663689 systemd[1]: Created slice kubepods.slice.
May 8 00:46:29.669030 systemd[1]: Created slice kubepods-burstable.slice.
May 8 00:46:29.671957 systemd[1]: Created slice kubepods-besteffort.slice.
May 8 00:46:29.683277 kubelet[1591]: I0508 00:46:29.682834 1591 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 00:46:29.683277 kubelet[1591]: I0508 00:46:29.683133 1591 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 00:46:29.683277 kubelet[1591]: I0508 00:46:29.683168 1591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 00:46:29.683673 kubelet[1591]: I0508 00:46:29.683648 1591 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 00:46:29.684922 kubelet[1591]: E0508 00:46:29.684902 1591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" May 8 00:46:29.684977 kubelet[1591]: E0508 00:46:29.684963 1591 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:46:29.708654 kubelet[1591]: W0508 00:46:29.708497 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 8 00:46:29.708823 kubelet[1591]: E0508 00:46:29.708654 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 8 00:46:29.785470 kubelet[1591]: I0508 00:46:29.785413 1591 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:46:29.785906 kubelet[1591]: E0508 00:46:29.785851 1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 8 00:46:29.809844 kubelet[1591]: W0508 00:46:29.809768 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 8 00:46:29.809844 kubelet[1591]: E0508 00:46:29.809842 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: 
connect: connection refused" logger="UnhandledError" May 8 00:46:29.988469 kubelet[1591]: I0508 00:46:29.988327 1591 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:46:29.988915 kubelet[1591]: E0508 00:46:29.988850 1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 8 00:46:30.128030 kubelet[1591]: E0508 00:46:30.127958 1591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="1.6s" May 8 00:46:30.129568 kubelet[1591]: W0508 00:46:30.129508 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 8 00:46:30.129642 kubelet[1591]: E0508 00:46:30.129594 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 8 00:46:30.240144 kubelet[1591]: W0508 00:46:30.239958 1591 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 8 00:46:30.240144 kubelet[1591]: E0508 00:46:30.240028 1591 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 8 00:46:30.252195 systemd[1]: Created slice kubepods-burstable-pod36bb1740346d3978c6a0e00983c0c341.slice. May 8 00:46:30.259555 kubelet[1591]: E0508 00:46:30.259489 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:30.262276 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 8 00:46:30.270098 kubelet[1591]: E0508 00:46:30.270030 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:30.272653 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 8 00:46:30.274502 kubelet[1591]: E0508 00:46:30.274477 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:30.333196 kubelet[1591]: I0508 00:46:30.333113 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:30.333196 kubelet[1591]: I0508 00:46:30.333171 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:46:30.333196 kubelet[1591]: I0508 
00:46:30.333189 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36bb1740346d3978c6a0e00983c0c341-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36bb1740346d3978c6a0e00983c0c341\") " pod="kube-system/kube-apiserver-localhost" May 8 00:46:30.333490 kubelet[1591]: I0508 00:46:30.333236 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36bb1740346d3978c6a0e00983c0c341-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36bb1740346d3978c6a0e00983c0c341\") " pod="kube-system/kube-apiserver-localhost" May 8 00:46:30.333490 kubelet[1591]: I0508 00:46:30.333254 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:30.333490 kubelet[1591]: I0508 00:46:30.333272 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:30.333490 kubelet[1591]: I0508 00:46:30.333336 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:30.333490 kubelet[1591]: I0508 00:46:30.333400 1591 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:30.333701 kubelet[1591]: I0508 00:46:30.333423 1591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36bb1740346d3978c6a0e00983c0c341-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36bb1740346d3978c6a0e00983c0c341\") " pod="kube-system/kube-apiserver-localhost" May 8 00:46:30.391373 kubelet[1591]: I0508 00:46:30.391330 1591 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:46:30.391791 kubelet[1591]: E0508 00:46:30.391756 1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 8 00:46:30.560675 kubelet[1591]: E0508 00:46:30.560480 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:30.561774 env[1220]: time="2025-05-08T00:46:30.561716490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36bb1740346d3978c6a0e00983c0c341,Namespace:kube-system,Attempt:0,}" May 8 00:46:30.570864 kubelet[1591]: E0508 00:46:30.570823 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:30.571477 env[1220]: time="2025-05-08T00:46:30.571333146Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 8 00:46:30.575784 kubelet[1591]: E0508 00:46:30.575743 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:30.576284 env[1220]: time="2025-05-08T00:46:30.576214163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 8 00:46:30.716009 kubelet[1591]: E0508 00:46:30.715950 1591 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 8 00:46:31.155216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484350638.mount: Deactivated successfully. 
May 8 00:46:31.162882 env[1220]: time="2025-05-08T00:46:31.162799908Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.166722 env[1220]: time="2025-05-08T00:46:31.166657567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.167921 env[1220]: time="2025-05-08T00:46:31.167886597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.169271 env[1220]: time="2025-05-08T00:46:31.169211381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.173067 env[1220]: time="2025-05-08T00:46:31.173006038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.174337 env[1220]: time="2025-05-08T00:46:31.174288081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.176120 env[1220]: time="2025-05-08T00:46:31.176051970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.179492 env[1220]: time="2025-05-08T00:46:31.179425867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 
00:46:31.193937 kubelet[1591]: I0508 00:46:31.193896 1591 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:46:31.194319 kubelet[1591]: E0508 00:46:31.194281 1591 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 8 00:46:31.205845 env[1220]: time="2025-05-08T00:46:31.205764144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.208005 env[1220]: time="2025-05-08T00:46:31.207938155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.209688 env[1220]: time="2025-05-08T00:46:31.209637332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.211332 env[1220]: time="2025-05-08T00:46:31.211268629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:46:31.241095 env[1220]: time="2025-05-08T00:46:31.240980904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:31.241095 env[1220]: time="2025-05-08T00:46:31.241058822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:31.241331 env[1220]: time="2025-05-08T00:46:31.241077076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:31.241696 env[1220]: time="2025-05-08T00:46:31.241543014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31d1d2c47497ff3abdda3d895f92d956d0a2e275c02c0021d7b8bf006a268d62 pid=1631 runtime=io.containerd.runc.v2 May 8 00:46:31.257536 systemd[1]: Started cri-containerd-31d1d2c47497ff3abdda3d895f92d956d0a2e275c02c0021d7b8bf006a268d62.scope. May 8 00:46:31.264368 env[1220]: time="2025-05-08T00:46:31.264134445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:31.264368 env[1220]: time="2025-05-08T00:46:31.264185413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:31.264368 env[1220]: time="2025-05-08T00:46:31.264199439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:31.266523 env[1220]: time="2025-05-08T00:46:31.264442462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61ebb969cba539257ad21d75cd2dca4062a3f02b4696c9c65e91b6cbbc913c9f pid=1671 runtime=io.containerd.runc.v2 May 8 00:46:31.269522 env[1220]: time="2025-05-08T00:46:31.269315013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:31.269522 env[1220]: time="2025-05-08T00:46:31.269363124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:31.269522 env[1220]: time="2025-05-08T00:46:31.269375909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:31.269998 env[1220]: time="2025-05-08T00:46:31.269933992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/039e93c4e1bbc4dd8ce1d7e7101d595c72bf18da7860dd7bbd771e0d2a7a1c15 pid=1659 runtime=io.containerd.runc.v2 May 8 00:46:31.283813 systemd[1]: Started cri-containerd-61ebb969cba539257ad21d75cd2dca4062a3f02b4696c9c65e91b6cbbc913c9f.scope. May 8 00:46:31.290085 systemd[1]: Started cri-containerd-039e93c4e1bbc4dd8ce1d7e7101d595c72bf18da7860dd7bbd771e0d2a7a1c15.scope. May 8 00:46:31.308056 env[1220]: time="2025-05-08T00:46:31.307995720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"31d1d2c47497ff3abdda3d895f92d956d0a2e275c02c0021d7b8bf006a268d62\"" May 8 00:46:31.309654 kubelet[1591]: E0508 00:46:31.309616 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:31.312441 env[1220]: time="2025-05-08T00:46:31.312401412Z" level=info msg="CreateContainer within sandbox \"31d1d2c47497ff3abdda3d895f92d956d0a2e275c02c0021d7b8bf006a268d62\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:46:31.331167 env[1220]: time="2025-05-08T00:46:31.331107914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36bb1740346d3978c6a0e00983c0c341,Namespace:kube-system,Attempt:0,} returns sandbox id \"61ebb969cba539257ad21d75cd2dca4062a3f02b4696c9c65e91b6cbbc913c9f\"" May 8 00:46:31.331927 kubelet[1591]: E0508 00:46:31.331896 
1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:31.332377 env[1220]: time="2025-05-08T00:46:31.332316596Z" level=info msg="CreateContainer within sandbox \"31d1d2c47497ff3abdda3d895f92d956d0a2e275c02c0021d7b8bf006a268d62\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e19c71729eddfa48cab26377864ea861f242676897e0e5e18f50ae626d5edfa\"" May 8 00:46:31.333597 env[1220]: time="2025-05-08T00:46:31.333545527Z" level=info msg="StartContainer for \"0e19c71729eddfa48cab26377864ea861f242676897e0e5e18f50ae626d5edfa\"" May 8 00:46:31.334160 env[1220]: time="2025-05-08T00:46:31.334131693Z" level=info msg="CreateContainer within sandbox \"61ebb969cba539257ad21d75cd2dca4062a3f02b4696c9c65e91b6cbbc913c9f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:46:31.338543 env[1220]: time="2025-05-08T00:46:31.338487720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"039e93c4e1bbc4dd8ce1d7e7101d595c72bf18da7860dd7bbd771e0d2a7a1c15\"" May 8 00:46:31.339592 kubelet[1591]: E0508 00:46:31.339429 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:31.341001 env[1220]: time="2025-05-08T00:46:31.340967222Z" level=info msg="CreateContainer within sandbox \"039e93c4e1bbc4dd8ce1d7e7101d595c72bf18da7860dd7bbd771e0d2a7a1c15\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:46:31.351228 systemd[1]: Started cri-containerd-0e19c71729eddfa48cab26377864ea861f242676897e0e5e18f50ae626d5edfa.scope. 
May 8 00:46:31.357640 env[1220]: time="2025-05-08T00:46:31.357565719Z" level=info msg="CreateContainer within sandbox \"61ebb969cba539257ad21d75cd2dca4062a3f02b4696c9c65e91b6cbbc913c9f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3efe49ad285184c2d491eede749d4c995e0a21a0a66a064e4e0d77dddc1c571\"" May 8 00:46:31.358108 env[1220]: time="2025-05-08T00:46:31.358072444Z" level=info msg="StartContainer for \"b3efe49ad285184c2d491eede749d4c995e0a21a0a66a064e4e0d77dddc1c571\"" May 8 00:46:31.366039 env[1220]: time="2025-05-08T00:46:31.365944017Z" level=info msg="CreateContainer within sandbox \"039e93c4e1bbc4dd8ce1d7e7101d595c72bf18da7860dd7bbd771e0d2a7a1c15\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9415075201777745469c0171d3c82a087d320629a6b2ad6c355afdeda7e18db4\"" May 8 00:46:31.366748 env[1220]: time="2025-05-08T00:46:31.366710447Z" level=info msg="StartContainer for \"9415075201777745469c0171d3c82a087d320629a6b2ad6c355afdeda7e18db4\"" May 8 00:46:31.375935 systemd[1]: Started cri-containerd-b3efe49ad285184c2d491eede749d4c995e0a21a0a66a064e4e0d77dddc1c571.scope. May 8 00:46:31.387793 systemd[1]: Started cri-containerd-9415075201777745469c0171d3c82a087d320629a6b2ad6c355afdeda7e18db4.scope. 
May 8 00:46:31.414799 env[1220]: time="2025-05-08T00:46:31.412936005Z" level=info msg="StartContainer for \"0e19c71729eddfa48cab26377864ea861f242676897e0e5e18f50ae626d5edfa\" returns successfully" May 8 00:46:31.424832 env[1220]: time="2025-05-08T00:46:31.424770257Z" level=info msg="StartContainer for \"b3efe49ad285184c2d491eede749d4c995e0a21a0a66a064e4e0d77dddc1c571\" returns successfully" May 8 00:46:31.442564 env[1220]: time="2025-05-08T00:46:31.442481352Z" level=info msg="StartContainer for \"9415075201777745469c0171d3c82a087d320629a6b2ad6c355afdeda7e18db4\" returns successfully" May 8 00:46:31.752734 kubelet[1591]: E0508 00:46:31.752597 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:31.752888 kubelet[1591]: E0508 00:46:31.752747 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:31.754831 kubelet[1591]: E0508 00:46:31.754800 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:31.754965 kubelet[1591]: E0508 00:46:31.754937 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:31.756976 kubelet[1591]: E0508 00:46:31.756944 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:31.757098 kubelet[1591]: E0508 00:46:31.757072 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:32.759161 kubelet[1591]: E0508 00:46:32.759110 
1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:32.759797 kubelet[1591]: E0508 00:46:32.759182 1591 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 8 00:46:32.759797 kubelet[1591]: E0508 00:46:32.759293 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:32.759797 kubelet[1591]: E0508 00:46:32.759293 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:32.795953 kubelet[1591]: I0508 00:46:32.795901 1591 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 8 00:46:33.779944 kubelet[1591]: E0508 00:46:33.779844 1591 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:46:33.792164 kubelet[1591]: I0508 00:46:33.792100 1591 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:46:33.792164 kubelet[1591]: E0508 00:46:33.792174 1591 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:46:33.824988 kubelet[1591]: I0508 00:46:33.824934 1591 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:46:33.866965 update_engine[1206]: I0508 00:46:33.864854 1206 update_attempter.cc:509] Updating boot flags... 
May 8 00:46:34.020903 kubelet[1591]: I0508 00:46:34.020805 1591 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:46:34.205016 kubelet[1591]: I0508 00:46:34.204951 1591 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:46:34.448087 kubelet[1591]: I0508 00:46:34.443692 1591 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:46:34.593682 kubelet[1591]: E0508 00:46:34.593638 1591 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:46:34.593869 kubelet[1591]: E0508 00:46:34.593861 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:34.778684 kubelet[1591]: I0508 00:46:34.778619 1591 apiserver.go:52] "Watching apiserver" May 8 00:46:34.780816 kubelet[1591]: E0508 00:46:34.780781 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:34.781158 kubelet[1591]: E0508 00:46:34.781132 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:34.782293 kubelet[1591]: E0508 00:46:34.782269 1591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:34.823707 kubelet[1591]: I0508 00:46:34.823628 1591 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:46:36.707983 systemd[1]: Reloading. 
May 8 00:46:36.782622 /usr/lib/systemd/system-generators/torcx-generator[1910]: time="2025-05-08T00:46:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:46:36.782652 /usr/lib/systemd/system-generators/torcx-generator[1910]: time="2025-05-08T00:46:36Z" level=info msg="torcx already run" May 8 00:46:36.859045 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:46:36.859065 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:46:36.879314 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:46:36.990070 systemd[1]: Stopping kubelet.service... May 8 00:46:37.016303 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:46:37.016501 systemd[1]: Stopped kubelet.service. May 8 00:46:37.016584 systemd[1]: kubelet.service: Consumed 1.135s CPU time. May 8 00:46:37.018660 systemd[1]: Starting kubelet.service... May 8 00:46:37.125100 systemd[1]: Started kubelet.service. May 8 00:46:37.203204 kubelet[1954]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:46:37.203204 kubelet[1954]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 8 00:46:37.203204 kubelet[1954]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:46:37.203716 kubelet[1954]: I0508 00:46:37.203232 1954 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:46:37.213022 kubelet[1954]: I0508 00:46:37.212954 1954 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:46:37.213022 kubelet[1954]: I0508 00:46:37.212992 1954 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:46:37.213362 kubelet[1954]: I0508 00:46:37.213329 1954 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:46:37.214941 kubelet[1954]: I0508 00:46:37.214906 1954 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:46:37.217991 kubelet[1954]: I0508 00:46:37.217954 1954 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:46:37.222821 kubelet[1954]: E0508 00:46:37.222752 1954 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:46:37.222821 kubelet[1954]: I0508 00:46:37.222807 1954 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:46:37.228562 kubelet[1954]: I0508 00:46:37.228536 1954 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:46:37.228875 kubelet[1954]: I0508 00:46:37.228823 1954 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:46:37.229085 kubelet[1954]: I0508 00:46:37.228865 1954 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:46:37.229085 kubelet[1954]: I0508 00:46:37.229084 1954 topology_manager.go:138] "Creating topology manager with none policy" May 
8 00:46:37.229247 kubelet[1954]: I0508 00:46:37.229099 1954 container_manager_linux.go:304] "Creating device plugin manager" May 8 00:46:37.229247 kubelet[1954]: I0508 00:46:37.229150 1954 state_mem.go:36] "Initialized new in-memory state store" May 8 00:46:37.229364 kubelet[1954]: I0508 00:46:37.229340 1954 kubelet.go:446] "Attempting to sync node with API server" May 8 00:46:37.229396 kubelet[1954]: I0508 00:46:37.229369 1954 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:46:37.229396 kubelet[1954]: I0508 00:46:37.229393 1954 kubelet.go:352] "Adding apiserver pod source" May 8 00:46:37.229475 kubelet[1954]: I0508 00:46:37.229406 1954 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:46:37.230848 kubelet[1954]: I0508 00:46:37.230815 1954 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:46:37.231603 kubelet[1954]: I0508 00:46:37.231548 1954 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:46:37.232474 kubelet[1954]: I0508 00:46:37.232447 1954 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:46:37.232557 kubelet[1954]: I0508 00:46:37.232533 1954 server.go:1287] "Started kubelet" May 8 00:46:37.236045 kubelet[1954]: I0508 00:46:37.236017 1954 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:46:37.242427 kubelet[1954]: E0508 00:46:37.240364 1954 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:46:37.242730 kubelet[1954]: I0508 00:46:37.242675 1954 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:46:37.244344 kubelet[1954]: I0508 00:46:37.244277 1954 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:46:37.244701 kubelet[1954]: I0508 00:46:37.244673 1954 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:46:37.245127 kubelet[1954]: I0508 00:46:37.245092 1954 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:46:37.246233 kubelet[1954]: I0508 00:46:37.246199 1954 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:46:37.246233 kubelet[1954]: I0508 00:46:37.244678 1954 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:46:37.247080 kubelet[1954]: I0508 00:46:37.246266 1954 server.go:490] "Adding debug handlers to kubelet server" May 8 00:46:37.247080 kubelet[1954]: I0508 00:46:37.246419 1954 reconciler.go:26] "Reconciler: start to sync state" May 8 00:46:37.249290 kubelet[1954]: I0508 00:46:37.249260 1954 factory.go:221] Registration of the systemd container factory successfully May 8 00:46:37.249630 kubelet[1954]: I0508 00:46:37.249604 1954 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:46:37.254042 kubelet[1954]: I0508 00:46:37.254001 1954 factory.go:221] Registration of the containerd container factory successfully May 8 00:46:37.264395 kubelet[1954]: I0508 00:46:37.264343 1954 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 8 00:46:37.266270 kubelet[1954]: I0508 00:46:37.265989 1954 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:46:37.266270 kubelet[1954]: I0508 00:46:37.266028 1954 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:46:37.266270 kubelet[1954]: I0508 00:46:37.266074 1954 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 00:46:37.266270 kubelet[1954]: I0508 00:46:37.266084 1954 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:46:37.266270 kubelet[1954]: E0508 00:46:37.266174 1954 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:46:37.292408 kubelet[1954]: I0508 00:46:37.292361 1954 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:46:37.292408 kubelet[1954]: I0508 00:46:37.292390 1954 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:46:37.292408 kubelet[1954]: I0508 00:46:37.292417 1954 state_mem.go:36] "Initialized new in-memory state store" May 8 00:46:37.292710 kubelet[1954]: I0508 00:46:37.292662 1954 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:46:37.292710 kubelet[1954]: I0508 00:46:37.292680 1954 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:46:37.292710 kubelet[1954]: I0508 00:46:37.292711 1954 policy_none.go:49] "None policy: Start" May 8 00:46:37.292789 kubelet[1954]: I0508 00:46:37.292722 1954 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:46:37.292789 kubelet[1954]: I0508 00:46:37.292731 1954 state_mem.go:35] "Initializing new in-memory state store" May 8 00:46:37.292873 kubelet[1954]: I0508 00:46:37.292825 1954 state_mem.go:75] "Updated machine memory state" May 8 00:46:37.297775 kubelet[1954]: I0508 00:46:37.297735 1954 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:46:37.297996 kubelet[1954]: I0508 00:46:37.297971 1954 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:46:37.298064 kubelet[1954]: I0508 00:46:37.298001 1954 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:46:37.300389 kubelet[1954]: I0508 00:46:37.299936 1954 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:46:37.302860 kubelet[1954]: E0508 00:46:37.301952 1954 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 00:46:37.367841 kubelet[1954]: I0508 00:46:37.367778 1954 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 8 00:46:37.367841 kubelet[1954]: I0508 00:46:37.367849 1954 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:46:37.368609 kubelet[1954]: I0508 00:46:37.368560 1954 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.375379 kubelet[1954]: E0508 00:46:37.375312 1954 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:46:37.375729 kubelet[1954]: E0508 00:46:37.375685 1954 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.375829 kubelet[1954]: E0508 00:46:37.375794 1954 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:46:37.408550 kubelet[1954]: I0508 00:46:37.408481 1954 kubelet_node_status.go:76] "Attempting to register node" 
node="localhost" May 8 00:46:37.415611 kubelet[1954]: I0508 00:46:37.415530 1954 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 8 00:46:37.415811 kubelet[1954]: I0508 00:46:37.415687 1954 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 8 00:46:37.447970 kubelet[1954]: I0508 00:46:37.447914 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36bb1740346d3978c6a0e00983c0c341-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36bb1740346d3978c6a0e00983c0c341\") " pod="kube-system/kube-apiserver-localhost" May 8 00:46:37.447970 kubelet[1954]: I0508 00:46:37.447969 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36bb1740346d3978c6a0e00983c0c341-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36bb1740346d3978c6a0e00983c0c341\") " pod="kube-system/kube-apiserver-localhost" May 8 00:46:37.448207 kubelet[1954]: I0508 00:46:37.448001 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.448207 kubelet[1954]: I0508 00:46:37.448095 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.448207 kubelet[1954]: I0508 00:46:37.448160 1954 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.448207 kubelet[1954]: I0508 00:46:37.448189 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.448336 kubelet[1954]: I0508 00:46:37.448228 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 8 00:46:37.448336 kubelet[1954]: I0508 00:46:37.448251 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36bb1740346d3978c6a0e00983c0c341-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36bb1740346d3978c6a0e00983c0c341\") " pod="kube-system/kube-apiserver-localhost" May 8 00:46:37.448336 kubelet[1954]: I0508 00:46:37.448276 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:46:37.676451 kubelet[1954]: E0508 00:46:37.676406 1954 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:37.676817 kubelet[1954]: E0508 00:46:37.676419 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:37.676940 kubelet[1954]: E0508 00:46:37.676594 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:37.688376 sudo[1990]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:46:37.688649 sudo[1990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 8 00:46:38.230494 kubelet[1954]: I0508 00:46:38.230434 1954 apiserver.go:52] "Watching apiserver" May 8 00:46:38.246727 kubelet[1954]: I0508 00:46:38.246654 1954 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:46:38.279588 kubelet[1954]: I0508 00:46:38.279534 1954 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 8 00:46:38.279768 kubelet[1954]: E0508 00:46:38.279542 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:38.279979 kubelet[1954]: I0508 00:46:38.279963 1954 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 8 00:46:38.283776 sudo[1990]: pam_unix(sudo:session): session closed for user root May 8 00:46:38.367837 kubelet[1954]: E0508 00:46:38.367792 1954 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 8 
00:46:38.368323 kubelet[1954]: E0508 00:46:38.368303 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:38.368895 kubelet[1954]: E0508 00:46:38.368084 1954 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:46:38.369139 kubelet[1954]: E0508 00:46:38.369121 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:38.424384 kubelet[1954]: I0508 00:46:38.424306 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.424280443 podStartE2EDuration="4.424280443s" podCreationTimestamp="2025-05-08 00:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:38.414314375 +0000 UTC m=+1.284933714" watchObservedRunningTime="2025-05-08 00:46:38.424280443 +0000 UTC m=+1.294899812" May 8 00:46:38.424838 kubelet[1954]: I0508 00:46:38.424804 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.424795588 podStartE2EDuration="5.424795588s" podCreationTimestamp="2025-05-08 00:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:38.423280248 +0000 UTC m=+1.293899597" watchObservedRunningTime="2025-05-08 00:46:38.424795588 +0000 UTC m=+1.295414967" May 8 00:46:39.281813 kubelet[1954]: E0508 00:46:39.281774 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:39.283071 kubelet[1954]: E0508 00:46:39.283052 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:39.284221 kubelet[1954]: E0508 00:46:39.283276 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:39.926340 sudo[1319]: pam_unix(sudo:session): session closed for user root May 8 00:46:39.928085 sshd[1316]: pam_unix(sshd:session): session closed for user core May 8 00:46:39.930964 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:47962.service: Deactivated successfully. May 8 00:46:39.931869 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:46:39.932033 systemd[1]: session-5.scope: Consumed 4.901s CPU time. May 8 00:46:39.932745 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit. May 8 00:46:39.933781 systemd-logind[1205]: Removed session 5. May 8 00:46:41.657532 kubelet[1954]: I0508 00:46:41.657449 1954 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:46:41.658144 env[1220]: time="2025-05-08T00:46:41.658095262Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 00:46:41.658536 kubelet[1954]: I0508 00:46:41.658385 1954 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:46:42.739032 kubelet[1954]: I0508 00:46:42.738968 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.738950658 podStartE2EDuration="8.738950658s" podCreationTimestamp="2025-05-08 00:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:38.432695723 +0000 UTC m=+1.303315062" watchObservedRunningTime="2025-05-08 00:46:42.738950658 +0000 UTC m=+5.609570007" May 8 00:46:42.745072 systemd[1]: Created slice kubepods-burstable-pod5d574d22_4fe9_420a_bd2e_137aa18e77e1.slice. May 8 00:46:42.761475 kubelet[1954]: W0508 00:46:42.760888 1954 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 8 00:46:42.761475 kubelet[1954]: E0508 00:46:42.760963 1954 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 8 00:46:42.762825 systemd[1]: Created slice kubepods-besteffort-podcf080e4b_2e2c_44f9_9e88_d752f54fb8b2.slice. 
May 8 00:46:42.806119 kubelet[1954]: I0508 00:46:42.806050 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-bpf-maps\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806119 kubelet[1954]: I0508 00:46:42.806099 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-lib-modules\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806119 kubelet[1954]: I0508 00:46:42.806129 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngztt\" (UniqueName: \"kubernetes.io/projected/cf080e4b-2e2c-44f9-9e88-d752f54fb8b2-kube-api-access-ngztt\") pod \"kube-proxy-lrt59\" (UID: \"cf080e4b-2e2c-44f9-9e88-d752f54fb8b2\") " pod="kube-system/kube-proxy-lrt59" May 8 00:46:42.806366 kubelet[1954]: I0508 00:46:42.806154 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdx8h\" (UniqueName: \"kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-kube-api-access-vdx8h\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806366 kubelet[1954]: I0508 00:46:42.806183 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf080e4b-2e2c-44f9-9e88-d752f54fb8b2-xtables-lock\") pod \"kube-proxy-lrt59\" (UID: \"cf080e4b-2e2c-44f9-9e88-d752f54fb8b2\") " pod="kube-system/kube-proxy-lrt59" May 8 00:46:42.806366 kubelet[1954]: I0508 00:46:42.806206 1954 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-xtables-lock\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806366 kubelet[1954]: I0508 00:46:42.806228 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-etc-cni-netd\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806366 kubelet[1954]: I0508 00:46:42.806246 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-run\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806366 kubelet[1954]: I0508 00:46:42.806265 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cni-path\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806515 kubelet[1954]: I0508 00:46:42.806309 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hubble-tls\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806515 kubelet[1954]: I0508 00:46:42.806344 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/cf080e4b-2e2c-44f9-9e88-d752f54fb8b2-lib-modules\") pod \"kube-proxy-lrt59\" (UID: \"cf080e4b-2e2c-44f9-9e88-d752f54fb8b2\") " pod="kube-system/kube-proxy-lrt59" May 8 00:46:42.806515 kubelet[1954]: I0508 00:46:42.806371 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hostproc\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806515 kubelet[1954]: I0508 00:46:42.806402 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d574d22-4fe9-420a-bd2e-137aa18e77e1-clustermesh-secrets\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806515 kubelet[1954]: I0508 00:46:42.806434 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-config-path\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806515 kubelet[1954]: I0508 00:46:42.806452 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-net\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806689 kubelet[1954]: I0508 00:46:42.806473 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf080e4b-2e2c-44f9-9e88-d752f54fb8b2-kube-proxy\") pod \"kube-proxy-lrt59\" (UID: 
\"cf080e4b-2e2c-44f9-9e88-d752f54fb8b2\") " pod="kube-system/kube-proxy-lrt59" May 8 00:46:42.806689 kubelet[1954]: I0508 00:46:42.806509 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-kernel\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.806689 kubelet[1954]: I0508 00:46:42.806543 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-cgroup\") pod \"cilium-xtrm5\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") " pod="kube-system/cilium-xtrm5" May 8 00:46:42.898624 systemd[1]: Created slice kubepods-besteffort-pod54bcdab1_0cda_437c_b15d_2390515fe3fa.slice. May 8 00:46:42.904631 kubelet[1954]: I0508 00:46:42.902744 1954 status_manager.go:890] "Failed to get status for pod" podUID="54bcdab1-0cda-437c-b15d-2390515fe3fa" pod="kube-system/cilium-operator-6c4d7847fc-wgrq9" err="pods \"cilium-operator-6c4d7847fc-wgrq9\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 8 00:46:42.907213 kubelet[1954]: I0508 00:46:42.907155 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh2d\" (UniqueName: \"kubernetes.io/projected/54bcdab1-0cda-437c-b15d-2390515fe3fa-kube-api-access-2vh2d\") pod \"cilium-operator-6c4d7847fc-wgrq9\" (UID: \"54bcdab1-0cda-437c-b15d-2390515fe3fa\") " pod="kube-system/cilium-operator-6c4d7847fc-wgrq9" May 8 00:46:42.907335 kubelet[1954]: I0508 00:46:42.907300 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54bcdab1-0cda-437c-b15d-2390515fe3fa-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wgrq9\" (UID: \"54bcdab1-0cda-437c-b15d-2390515fe3fa\") " pod="kube-system/cilium-operator-6c4d7847fc-wgrq9" May 8 00:46:42.908263 kubelet[1954]: I0508 00:46:42.907935 1954 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 8 00:46:43.048775 kubelet[1954]: E0508 00:46:43.047874 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:43.048977 env[1220]: time="2025-05-08T00:46:43.048750274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xtrm5,Uid:5d574d22-4fe9-420a-bd2e-137aa18e77e1,Namespace:kube-system,Attempt:0,}" May 8 00:46:43.130805 env[1220]: time="2025-05-08T00:46:43.130696097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:43.130805 env[1220]: time="2025-05-08T00:46:43.130745701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:43.130805 env[1220]: time="2025-05-08T00:46:43.130763615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:43.131104 env[1220]: time="2025-05-08T00:46:43.130960216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824 pid=2047 runtime=io.containerd.runc.v2 May 8 00:46:43.144229 systemd[1]: Started cri-containerd-d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824.scope. May 8 00:46:43.172434 env[1220]: time="2025-05-08T00:46:43.172387176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xtrm5,Uid:5d574d22-4fe9-420a-bd2e-137aa18e77e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\"" May 8 00:46:43.173466 kubelet[1954]: E0508 00:46:43.173431 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:43.175098 env[1220]: time="2025-05-08T00:46:43.175041070Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:46:43.202003 kubelet[1954]: E0508 00:46:43.201962 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:43.202435 env[1220]: time="2025-05-08T00:46:43.202398902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wgrq9,Uid:54bcdab1-0cda-437c-b15d-2390515fe3fa,Namespace:kube-system,Attempt:0,}" May 8 00:46:43.219886 env[1220]: time="2025-05-08T00:46:43.219783773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:43.219886 env[1220]: time="2025-05-08T00:46:43.219824830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:43.219886 env[1220]: time="2025-05-08T00:46:43.219834869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:43.220471 env[1220]: time="2025-05-08T00:46:43.220385740Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185 pid=2088 runtime=io.containerd.runc.v2 May 8 00:46:43.233291 systemd[1]: Started cri-containerd-2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185.scope. May 8 00:46:43.278645 env[1220]: time="2025-05-08T00:46:43.278564145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wgrq9,Uid:54bcdab1-0cda-437c-b15d-2390515fe3fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\"" May 8 00:46:43.279299 kubelet[1954]: E0508 00:46:43.279265 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:43.317635 kubelet[1954]: E0508 00:46:43.317445 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:43.913373 kubelet[1954]: E0508 00:46:43.913309 1954 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 8 00:46:43.913828 kubelet[1954]: E0508 00:46:43.913410 1954 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/cf080e4b-2e2c-44f9-9e88-d752f54fb8b2-kube-proxy podName:cf080e4b-2e2c-44f9-9e88-d752f54fb8b2 nodeName:}" failed. No retries permitted until 2025-05-08 00:46:44.413389302 +0000 UTC m=+7.284008641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cf080e4b-2e2c-44f9-9e88-d752f54fb8b2-kube-proxy") pod "kube-proxy-lrt59" (UID: "cf080e4b-2e2c-44f9-9e88-d752f54fb8b2") : failed to sync configmap cache: timed out waiting for the condition May 8 00:46:44.291127 kubelet[1954]: E0508 00:46:44.291008 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:44.574011 kubelet[1954]: E0508 00:46:44.573968 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:44.574715 env[1220]: time="2025-05-08T00:46:44.574641665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lrt59,Uid:cf080e4b-2e2c-44f9-9e88-d752f54fb8b2,Namespace:kube-system,Attempt:0,}" May 8 00:46:44.591388 env[1220]: time="2025-05-08T00:46:44.591298037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:46:44.591388 env[1220]: time="2025-05-08T00:46:44.591341749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:46:44.591388 env[1220]: time="2025-05-08T00:46:44.591351528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:46:44.591951 env[1220]: time="2025-05-08T00:46:44.591850941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/711c7381fb8b6902bd98b51368b5f522b5b4849f44df2e70212446e37d4b04fb pid=2129 runtime=io.containerd.runc.v2 May 8 00:46:44.606785 systemd[1]: run-containerd-runc-k8s.io-711c7381fb8b6902bd98b51368b5f522b5b4849f44df2e70212446e37d4b04fb-runc.DBKYVs.mount: Deactivated successfully. May 8 00:46:44.610175 systemd[1]: Started cri-containerd-711c7381fb8b6902bd98b51368b5f522b5b4849f44df2e70212446e37d4b04fb.scope. May 8 00:46:44.634971 env[1220]: time="2025-05-08T00:46:44.634921560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lrt59,Uid:cf080e4b-2e2c-44f9-9e88-d752f54fb8b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"711c7381fb8b6902bd98b51368b5f522b5b4849f44df2e70212446e37d4b04fb\"" May 8 00:46:44.635448 kubelet[1954]: E0508 00:46:44.635413 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:44.642794 env[1220]: time="2025-05-08T00:46:44.642672135Z" level=info msg="CreateContainer within sandbox \"711c7381fb8b6902bd98b51368b5f522b5b4849f44df2e70212446e37d4b04fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:46:44.660696 env[1220]: time="2025-05-08T00:46:44.660610477Z" level=info msg="CreateContainer within sandbox \"711c7381fb8b6902bd98b51368b5f522b5b4849f44df2e70212446e37d4b04fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f9af1ebe4d1a2774429f5363342642d3dd01555f63ce8f42f39b6945afabdffc\"" May 8 00:46:44.661398 env[1220]: time="2025-05-08T00:46:44.661372015Z" level=info msg="StartContainer for \"f9af1ebe4d1a2774429f5363342642d3dd01555f63ce8f42f39b6945afabdffc\"" May 8 00:46:44.672093 kubelet[1954]: E0508 
00:46:44.671993 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:44.684374 systemd[1]: Started cri-containerd-f9af1ebe4d1a2774429f5363342642d3dd01555f63ce8f42f39b6945afabdffc.scope. May 8 00:46:44.714081 env[1220]: time="2025-05-08T00:46:44.714018756Z" level=info msg="StartContainer for \"f9af1ebe4d1a2774429f5363342642d3dd01555f63ce8f42f39b6945afabdffc\" returns successfully" May 8 00:46:45.295217 kubelet[1954]: E0508 00:46:45.294989 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:45.295217 kubelet[1954]: E0508 00:46:45.295090 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:45.308213 kubelet[1954]: I0508 00:46:45.308081 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lrt59" podStartSLOduration=3.308057397 podStartE2EDuration="3.308057397s" podCreationTimestamp="2025-05-08 00:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:46:45.308024745 +0000 UTC m=+8.178644094" watchObservedRunningTime="2025-05-08 00:46:45.308057397 +0000 UTC m=+8.178676776" May 8 00:46:46.297128 kubelet[1954]: E0508 00:46:46.297094 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:46:48.371457 kubelet[1954]: E0508 00:46:48.371422 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" May 8 00:46:54.681473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343023824.mount: Deactivated successfully. May 8 00:46:59.970715 env[1220]: time="2025-05-08T00:46:59.970556961Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:00.040806 env[1220]: time="2025-05-08T00:47:00.040735338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:00.087304 env[1220]: time="2025-05-08T00:47:00.087220789Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:00.088186 env[1220]: time="2025-05-08T00:47:00.088126893Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:47:00.089821 env[1220]: time="2025-05-08T00:47:00.089768891Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:47:00.090878 env[1220]: time="2025-05-08T00:47:00.090846607Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:47:00.476769 env[1220]: time="2025-05-08T00:47:00.476683331Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\"" May 8 00:47:00.477343 env[1220]: time="2025-05-08T00:47:00.477311923Z" level=info msg="StartContainer for \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\"" May 8 00:47:00.498933 systemd[1]: Started cri-containerd-46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3.scope. May 8 00:47:00.543141 systemd[1]: cri-containerd-46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3.scope: Deactivated successfully. May 8 00:47:01.428227 env[1220]: time="2025-05-08T00:47:01.428065435Z" level=info msg="StartContainer for \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\" returns successfully" May 8 00:47:01.444842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3-rootfs.mount: Deactivated successfully. May 8 00:47:01.641555 env[1220]: time="2025-05-08T00:47:01.641460797Z" level=info msg="shim disconnected" id=46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3 May 8 00:47:01.641555 env[1220]: time="2025-05-08T00:47:01.641530619Z" level=warning msg="cleaning up after shim disconnected" id=46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3 namespace=k8s.io May 8 00:47:01.641555 env[1220]: time="2025-05-08T00:47:01.641548753Z" level=info msg="cleaning up dead shim" May 8 00:47:01.649604 env[1220]: time="2025-05-08T00:47:01.649490392Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2376 runtime=io.containerd.runc.v2\n" May 8 00:47:02.434009 kubelet[1954]: E0508 00:47:02.433967 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:02.436072 env[1220]: 
time="2025-05-08T00:47:02.436017032Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:47:02.473905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4071499915.mount: Deactivated successfully. May 8 00:47:02.484799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977797502.mount: Deactivated successfully. May 8 00:47:02.490149 env[1220]: time="2025-05-08T00:47:02.490063901Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\"" May 8 00:47:02.490738 env[1220]: time="2025-05-08T00:47:02.490713021Z" level=info msg="StartContainer for \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\"" May 8 00:47:02.509686 systemd[1]: Started cri-containerd-8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b.scope. May 8 00:47:02.539670 env[1220]: time="2025-05-08T00:47:02.538657319Z" level=info msg="StartContainer for \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\" returns successfully" May 8 00:47:02.548959 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:47:02.549178 systemd[1]: Stopped systemd-sysctl.service. May 8 00:47:02.549353 systemd[1]: Stopping systemd-sysctl.service... May 8 00:47:02.550887 systemd[1]: Starting systemd-sysctl.service... May 8 00:47:02.554651 systemd[1]: cri-containerd-8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b.scope: Deactivated successfully. May 8 00:47:02.561226 systemd[1]: Finished systemd-sysctl.service. 
May 8 00:47:02.641733 env[1220]: time="2025-05-08T00:47:02.641671519Z" level=info msg="shim disconnected" id=8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b May 8 00:47:02.641733 env[1220]: time="2025-05-08T00:47:02.641721974Z" level=warning msg="cleaning up after shim disconnected" id=8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b namespace=k8s.io May 8 00:47:02.641733 env[1220]: time="2025-05-08T00:47:02.641732093Z" level=info msg="cleaning up dead shim" May 8 00:47:02.650491 env[1220]: time="2025-05-08T00:47:02.650416487Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2439 runtime=io.containerd.runc.v2\n" May 8 00:47:03.136289 env[1220]: time="2025-05-08T00:47:03.136199691Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:03.138363 env[1220]: time="2025-05-08T00:47:03.138300129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:03.140268 env[1220]: time="2025-05-08T00:47:03.140210079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:03.140857 env[1220]: time="2025-05-08T00:47:03.140820325Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:47:03.143382 env[1220]: 
time="2025-05-08T00:47:03.143349048Z" level=info msg="CreateContainer within sandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:47:03.157758 env[1220]: time="2025-05-08T00:47:03.157670002Z" level=info msg="CreateContainer within sandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\"" May 8 00:47:03.158318 env[1220]: time="2025-05-08T00:47:03.158282203Z" level=info msg="StartContainer for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\"" May 8 00:47:03.175061 systemd[1]: Started cri-containerd-dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac.scope. May 8 00:47:03.209044 env[1220]: time="2025-05-08T00:47:03.208981908Z" level=info msg="StartContainer for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" returns successfully" May 8 00:47:03.437599 kubelet[1954]: E0508 00:47:03.437427 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:03.440404 kubelet[1954]: E0508 00:47:03.440207 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:03.442244 env[1220]: time="2025-05-08T00:47:03.442183446Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:47:03.469543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b-rootfs.mount: Deactivated successfully. 
May 8 00:47:03.892712 kubelet[1954]: I0508 00:47:03.892641 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wgrq9" podStartSLOduration=2.030603619 podStartE2EDuration="21.892621598s" podCreationTimestamp="2025-05-08 00:46:42 +0000 UTC" firstStartedPulling="2025-05-08 00:46:43.279953991 +0000 UTC m=+6.150573340" lastFinishedPulling="2025-05-08 00:47:03.14197197 +0000 UTC m=+26.012591319" observedRunningTime="2025-05-08 00:47:03.484477654 +0000 UTC m=+26.355097003" watchObservedRunningTime="2025-05-08 00:47:03.892621598 +0000 UTC m=+26.763240947" May 8 00:47:03.986970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6301707.mount: Deactivated successfully. May 8 00:47:04.216853 env[1220]: time="2025-05-08T00:47:04.216675273Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\"" May 8 00:47:04.217534 env[1220]: time="2025-05-08T00:47:04.217473864Z" level=info msg="StartContainer for \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\"" May 8 00:47:04.262719 systemd[1]: Started cri-containerd-898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e.scope. May 8 00:47:04.314606 systemd[1]: cri-containerd-898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e.scope: Deactivated successfully. 
May 8 00:47:04.531543 env[1220]: time="2025-05-08T00:47:04.531386059Z" level=info msg="StartContainer for \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\" returns successfully" May 8 00:47:04.534261 kubelet[1954]: E0508 00:47:04.534197 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:04.547852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e-rootfs.mount: Deactivated successfully. May 8 00:47:05.070701 env[1220]: time="2025-05-08T00:47:05.070634469Z" level=info msg="shim disconnected" id=898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e May 8 00:47:05.070701 env[1220]: time="2025-05-08T00:47:05.070687880Z" level=warning msg="cleaning up after shim disconnected" id=898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e namespace=k8s.io May 8 00:47:05.071063 env[1220]: time="2025-05-08T00:47:05.070715292Z" level=info msg="cleaning up dead shim" May 8 00:47:05.079249 env[1220]: time="2025-05-08T00:47:05.079177982Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2533 runtime=io.containerd.runc.v2\n" May 8 00:47:05.537448 kubelet[1954]: E0508 00:47:05.537304 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:05.539097 env[1220]: time="2025-05-08T00:47:05.539054748Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:47:06.930621 env[1220]: time="2025-05-08T00:47:06.930542066Z" level=info msg="CreateContainer within sandbox 
\"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\"" May 8 00:47:06.931174 env[1220]: time="2025-05-08T00:47:06.931143926Z" level=info msg="StartContainer for \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\"" May 8 00:47:06.949681 systemd[1]: Started cri-containerd-a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d.scope. May 8 00:47:06.988268 systemd[1]: cri-containerd-a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d.scope: Deactivated successfully. May 8 00:47:07.226133 env[1220]: time="2025-05-08T00:47:07.225969096Z" level=info msg="StartContainer for \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\" returns successfully" May 8 00:47:07.240493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d-rootfs.mount: Deactivated successfully. 
May 8 00:47:07.395978 env[1220]: time="2025-05-08T00:47:07.395870135Z" level=info msg="shim disconnected" id=a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d May 8 00:47:07.395978 env[1220]: time="2025-05-08T00:47:07.395961256Z" level=warning msg="cleaning up after shim disconnected" id=a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d namespace=k8s.io May 8 00:47:07.395978 env[1220]: time="2025-05-08T00:47:07.395983798Z" level=info msg="cleaning up dead shim" May 8 00:47:07.404377 env[1220]: time="2025-05-08T00:47:07.404307936Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2589 runtime=io.containerd.runc.v2\n" May 8 00:47:07.544384 kubelet[1954]: E0508 00:47:07.543858 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:07.546162 env[1220]: time="2025-05-08T00:47:07.546100281Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:47:07.994478 env[1220]: time="2025-05-08T00:47:07.994333554Z" level=info msg="CreateContainer within sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\"" May 8 00:47:07.995213 env[1220]: time="2025-05-08T00:47:07.995143626Z" level=info msg="StartContainer for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\"" May 8 00:47:08.015671 systemd[1]: Started cri-containerd-1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e.scope. 
May 8 00:47:08.042199 env[1220]: time="2025-05-08T00:47:08.042134152Z" level=info msg="StartContainer for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" returns successfully" May 8 00:47:08.108722 kubelet[1954]: I0508 00:47:08.108678 1954 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:47:08.142401 systemd[1]: Created slice kubepods-burstable-pod43de983d_01e1_4491_acf7_8dcc1fe05c50.slice. May 8 00:47:08.148266 kubelet[1954]: I0508 00:47:08.148072 1954 status_manager.go:890] "Failed to get status for pod" podUID="43de983d-01e1-4491-acf7-8dcc1fe05c50" pod="kube-system/coredns-668d6bf9bc-f9vd7" err="pods \"coredns-668d6bf9bc-f9vd7\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 8 00:47:08.150690 systemd[1]: Created slice kubepods-burstable-pod53f06ca1_7e30_4f81_a063_4b68c0efe2d8.slice. 
May 8 00:47:08.287760 kubelet[1954]: I0508 00:47:08.287607 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43de983d-01e1-4491-acf7-8dcc1fe05c50-config-volume\") pod \"coredns-668d6bf9bc-f9vd7\" (UID: \"43de983d-01e1-4491-acf7-8dcc1fe05c50\") " pod="kube-system/coredns-668d6bf9bc-f9vd7" May 8 00:47:08.288062 kubelet[1954]: I0508 00:47:08.288030 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqxx6\" (UniqueName: \"kubernetes.io/projected/53f06ca1-7e30-4f81-a063-4b68c0efe2d8-kube-api-access-wqxx6\") pod \"coredns-668d6bf9bc-gnx7h\" (UID: \"53f06ca1-7e30-4f81-a063-4b68c0efe2d8\") " pod="kube-system/coredns-668d6bf9bc-gnx7h" May 8 00:47:08.288261 kubelet[1954]: I0508 00:47:08.288236 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53f06ca1-7e30-4f81-a063-4b68c0efe2d8-config-volume\") pod \"coredns-668d6bf9bc-gnx7h\" (UID: \"53f06ca1-7e30-4f81-a063-4b68c0efe2d8\") " pod="kube-system/coredns-668d6bf9bc-gnx7h" May 8 00:47:08.288426 kubelet[1954]: I0508 00:47:08.288401 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnj4\" (UniqueName: \"kubernetes.io/projected/43de983d-01e1-4491-acf7-8dcc1fe05c50-kube-api-access-swnj4\") pod \"coredns-668d6bf9bc-f9vd7\" (UID: \"43de983d-01e1-4491-acf7-8dcc1fe05c50\") " pod="kube-system/coredns-668d6bf9bc-f9vd7" May 8 00:47:08.447657 kubelet[1954]: E0508 00:47:08.447590 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:08.448760 env[1220]: time="2025-05-08T00:47:08.448539597Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-f9vd7,Uid:43de983d-01e1-4491-acf7-8dcc1fe05c50,Namespace:kube-system,Attempt:0,}" May 8 00:47:08.456246 kubelet[1954]: E0508 00:47:08.456205 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:08.456898 env[1220]: time="2025-05-08T00:47:08.456834989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gnx7h,Uid:53f06ca1-7e30-4f81-a063-4b68c0efe2d8,Namespace:kube-system,Attempt:0,}" May 8 00:47:08.552014 kubelet[1954]: E0508 00:47:08.551813 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:09.553514 kubelet[1954]: E0508 00:47:09.553468 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:10.082131 systemd-networkd[1039]: cilium_host: Link UP May 8 00:47:10.082292 systemd-networkd[1039]: cilium_net: Link UP May 8 00:47:10.082296 systemd-networkd[1039]: cilium_net: Gained carrier May 8 00:47:10.082469 systemd-networkd[1039]: cilium_host: Gained carrier May 8 00:47:10.085107 systemd-networkd[1039]: cilium_host: Gained IPv6LL May 8 00:47:10.085641 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 8 00:47:10.179104 systemd-networkd[1039]: cilium_vxlan: Link UP May 8 00:47:10.179115 systemd-networkd[1039]: cilium_vxlan: Gained carrier May 8 00:47:10.386617 kernel: NET: Registered PF_ALG protocol family May 8 00:47:10.555675 kubelet[1954]: E0508 00:47:10.555602 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:10.862706 
systemd-networkd[1039]: cilium_net: Gained IPv6LL May 8 00:47:10.986064 systemd-networkd[1039]: lxc_health: Link UP May 8 00:47:10.996030 systemd-networkd[1039]: lxc_health: Gained carrier May 8 00:47:10.996601 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 00:47:11.065896 kubelet[1954]: I0508 00:47:11.065494 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xtrm5" podStartSLOduration=12.150450282 podStartE2EDuration="29.065473204s" podCreationTimestamp="2025-05-08 00:46:42 +0000 UTC" firstStartedPulling="2025-05-08 00:46:43.174507731 +0000 UTC m=+6.045127080" lastFinishedPulling="2025-05-08 00:47:00.089530653 +0000 UTC m=+22.960150002" observedRunningTime="2025-05-08 00:47:08.56610537 +0000 UTC m=+31.436724719" watchObservedRunningTime="2025-05-08 00:47:11.065473204 +0000 UTC m=+33.936092553" May 8 00:47:11.510220 systemd-networkd[1039]: lxc281662710e48: Link UP May 8 00:47:11.523724 systemd-networkd[1039]: lxcf5bca4ce2d82: Link UP May 8 00:47:11.532605 kernel: eth0: renamed from tmpfad1c May 8 00:47:11.540640 kernel: eth0: renamed from tmp03eb2 May 8 00:47:11.571679 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:47:11.571858 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf5bca4ce2d82: link becomes ready May 8 00:47:11.573684 kubelet[1954]: E0508 00:47:11.573629 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:11.574885 systemd-networkd[1039]: lxcf5bca4ce2d82: Gained carrier May 8 00:47:11.580044 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:47:11.581909 systemd-networkd[1039]: lxc281662710e48: Gained carrier May 8 00:47:11.582622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc281662710e48: link becomes ready May 8 00:47:11.887747 systemd-networkd[1039]: cilium_vxlan: Gained IPv6LL May 8 00:47:12.575396 
kubelet[1954]: E0508 00:47:12.575364 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:12.846773 systemd-networkd[1039]: lxc_health: Gained IPv6LL May 8 00:47:13.295835 systemd-networkd[1039]: lxc281662710e48: Gained IPv6LL May 8 00:47:13.486849 systemd-networkd[1039]: lxcf5bca4ce2d82: Gained IPv6LL May 8 00:47:13.577052 kubelet[1954]: E0508 00:47:13.576992 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:13.645610 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:59744.service. May 8 00:47:13.684220 sshd[3138]: Accepted publickey for core from 10.0.0.1 port 59744 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:13.685940 sshd[3138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:13.690626 systemd-logind[1205]: New session 6 of user core. May 8 00:47:13.691140 systemd[1]: Started session-6.scope. May 8 00:47:13.848539 sshd[3138]: pam_unix(sshd:session): session closed for user core May 8 00:47:13.852711 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:59744.service: Deactivated successfully. May 8 00:47:13.853630 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:47:13.854306 systemd-logind[1205]: Session 6 logged out. Waiting for processes to exit. May 8 00:47:13.855278 systemd-logind[1205]: Removed session 6. May 8 00:47:15.664364 env[1220]: time="2025-05-08T00:47:15.664265667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:15.664364 env[1220]: time="2025-05-08T00:47:15.664316533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:15.664364 env[1220]: time="2025-05-08T00:47:15.664337783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:15.664860 env[1220]: time="2025-05-08T00:47:15.664519544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03eb2cec3a51d7f63a2a51a953525af37b100838beeef2df6da82ad3af573000 pid=3172 runtime=io.containerd.runc.v2 May 8 00:47:15.677142 systemd[1]: Started cri-containerd-03eb2cec3a51d7f63a2a51a953525af37b100838beeef2df6da82ad3af573000.scope. May 8 00:47:15.689399 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:47:15.710727 env[1220]: time="2025-05-08T00:47:15.710673243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f9vd7,Uid:43de983d-01e1-4491-acf7-8dcc1fe05c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"03eb2cec3a51d7f63a2a51a953525af37b100838beeef2df6da82ad3af573000\"" May 8 00:47:15.712961 kubelet[1954]: E0508 00:47:15.712935 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:15.735389 env[1220]: time="2025-05-08T00:47:15.735293746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:15.735389 env[1220]: time="2025-05-08T00:47:15.735334773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:15.735389 env[1220]: time="2025-05-08T00:47:15.735344641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:15.735600 env[1220]: time="2025-05-08T00:47:15.735466169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fad1cfd9f98a09886a55903c8d942e60f8a689f7baa706ec60bde563bcffb592 pid=3212 runtime=io.containerd.runc.v2 May 8 00:47:15.749966 systemd[1]: run-containerd-runc-k8s.io-fad1cfd9f98a09886a55903c8d942e60f8a689f7baa706ec60bde563bcffb592-runc.LRz9oQ.mount: Deactivated successfully. May 8 00:47:15.752350 env[1220]: time="2025-05-08T00:47:15.752303791Z" level=info msg="CreateContainer within sandbox \"03eb2cec3a51d7f63a2a51a953525af37b100838beeef2df6da82ad3af573000\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:47:15.754962 systemd[1]: Started cri-containerd-fad1cfd9f98a09886a55903c8d942e60f8a689f7baa706ec60bde563bcffb592.scope. May 8 00:47:15.766682 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:47:15.787979 env[1220]: time="2025-05-08T00:47:15.787904558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gnx7h,Uid:53f06ca1-7e30-4f81-a063-4b68c0efe2d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fad1cfd9f98a09886a55903c8d942e60f8a689f7baa706ec60bde563bcffb592\"" May 8 00:47:15.789038 kubelet[1954]: E0508 00:47:15.788828 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:15.790734 env[1220]: time="2025-05-08T00:47:15.790705226Z" level=info msg="CreateContainer within sandbox \"fad1cfd9f98a09886a55903c8d942e60f8a689f7baa706ec60bde563bcffb592\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:47:15.990846 env[1220]: time="2025-05-08T00:47:15.990457954Z" level=info msg="CreateContainer within sandbox 
\"fad1cfd9f98a09886a55903c8d942e60f8a689f7baa706ec60bde563bcffb592\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"788b352bc7641824da6d53cdd9eec8371878a3ca1dd03efd00fb7ae963cd8502\"" May 8 00:47:15.991805 env[1220]: time="2025-05-08T00:47:15.991392419Z" level=info msg="StartContainer for \"788b352bc7641824da6d53cdd9eec8371878a3ca1dd03efd00fb7ae963cd8502\"" May 8 00:47:15.992070 env[1220]: time="2025-05-08T00:47:15.991998617Z" level=info msg="CreateContainer within sandbox \"03eb2cec3a51d7f63a2a51a953525af37b100838beeef2df6da82ad3af573000\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1d7cf5078a996de6c07ed7612ff92f2270cfcdbd45b32ac1da12424257fe61e\"" May 8 00:47:15.992829 env[1220]: time="2025-05-08T00:47:15.992796564Z" level=info msg="StartContainer for \"c1d7cf5078a996de6c07ed7612ff92f2270cfcdbd45b32ac1da12424257fe61e\"" May 8 00:47:16.006313 systemd[1]: Started cri-containerd-788b352bc7641824da6d53cdd9eec8371878a3ca1dd03efd00fb7ae963cd8502.scope. May 8 00:47:16.013296 systemd[1]: Started cri-containerd-c1d7cf5078a996de6c07ed7612ff92f2270cfcdbd45b32ac1da12424257fe61e.scope. 
May 8 00:47:16.036396 env[1220]: time="2025-05-08T00:47:16.036294254Z" level=info msg="StartContainer for \"788b352bc7641824da6d53cdd9eec8371878a3ca1dd03efd00fb7ae963cd8502\" returns successfully" May 8 00:47:16.048862 env[1220]: time="2025-05-08T00:47:16.048789711Z" level=info msg="StartContainer for \"c1d7cf5078a996de6c07ed7612ff92f2270cfcdbd45b32ac1da12424257fe61e\" returns successfully" May 8 00:47:16.585219 kubelet[1954]: E0508 00:47:16.584868 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:16.586142 kubelet[1954]: E0508 00:47:16.586118 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:16.641652 kubelet[1954]: I0508 00:47:16.641544 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gnx7h" podStartSLOduration=34.641522261 podStartE2EDuration="34.641522261s" podCreationTimestamp="2025-05-08 00:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:16.641196541 +0000 UTC m=+39.511815900" watchObservedRunningTime="2025-05-08 00:47:16.641522261 +0000 UTC m=+39.512141611" May 8 00:47:16.690488 kubelet[1954]: I0508 00:47:16.690424 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f9vd7" podStartSLOduration=34.690400451 podStartE2EDuration="34.690400451s" podCreationTimestamp="2025-05-08 00:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:47:16.673485637 +0000 UTC m=+39.544104986" watchObservedRunningTime="2025-05-08 00:47:16.690400451 +0000 UTC m=+39.561019800" May 8 
00:47:17.589018 kubelet[1954]: E0508 00:47:17.588952 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:17.589676 kubelet[1954]: E0508 00:47:17.589074 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:18.590831 kubelet[1954]: E0508 00:47:18.590776 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:18.591566 kubelet[1954]: E0508 00:47:18.590980 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:18.855699 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:41604.service. May 8 00:47:18.894606 sshd[3326]: Accepted publickey for core from 10.0.0.1 port 41604 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:18.896546 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:18.901156 systemd-logind[1205]: New session 7 of user core. May 8 00:47:18.902154 systemd[1]: Started session-7.scope. May 8 00:47:19.041648 sshd[3326]: pam_unix(sshd:session): session closed for user core May 8 00:47:19.044731 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:41604.service: Deactivated successfully. May 8 00:47:19.045658 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:47:19.046445 systemd-logind[1205]: Session 7 logged out. Waiting for processes to exit. May 8 00:47:19.047434 systemd-logind[1205]: Removed session 7. May 8 00:47:24.047384 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:41606.service. 
May 8 00:47:24.083672 sshd[3340]: Accepted publickey for core from 10.0.0.1 port 41606 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:24.085220 sshd[3340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:24.090029 systemd-logind[1205]: New session 8 of user core. May 8 00:47:24.091192 systemd[1]: Started session-8.scope. May 8 00:47:24.224991 sshd[3340]: pam_unix(sshd:session): session closed for user core May 8 00:47:24.228919 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:41606.service: Deactivated successfully. May 8 00:47:24.229870 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:47:24.230519 systemd-logind[1205]: Session 8 logged out. Waiting for processes to exit. May 8 00:47:24.231424 systemd-logind[1205]: Removed session 8. May 8 00:47:29.230378 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:49428.service. May 8 00:47:29.269001 sshd[3354]: Accepted publickey for core from 10.0.0.1 port 49428 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:29.270617 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:29.275203 systemd-logind[1205]: New session 9 of user core. May 8 00:47:29.276282 systemd[1]: Started session-9.scope. May 8 00:47:29.426097 sshd[3354]: pam_unix(sshd:session): session closed for user core May 8 00:47:29.429859 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:49428.service: Deactivated successfully. May 8 00:47:29.430767 systemd[1]: session-9.scope: Deactivated successfully. May 8 00:47:29.431742 systemd-logind[1205]: Session 9 logged out. Waiting for processes to exit. May 8 00:47:29.432830 systemd-logind[1205]: Removed session 9. May 8 00:47:34.431646 systemd[1]: Started sshd@9-10.0.0.73:22-10.0.0.1:49438.service. 
May 8 00:47:34.466654 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 49438 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:34.468505 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:34.473109 systemd-logind[1205]: New session 10 of user core. May 8 00:47:34.474325 systemd[1]: Started session-10.scope. May 8 00:47:34.591100 sshd[3369]: pam_unix(sshd:session): session closed for user core May 8 00:47:34.593344 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:49438.service: Deactivated successfully. May 8 00:47:34.594201 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:47:34.594776 systemd-logind[1205]: Session 10 logged out. Waiting for processes to exit. May 8 00:47:34.595498 systemd-logind[1205]: Removed session 10. May 8 00:47:39.596346 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:46234.service. May 8 00:47:39.629278 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 46234 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:39.630619 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:39.634388 systemd-logind[1205]: New session 11 of user core. May 8 00:47:39.635651 systemd[1]: Started session-11.scope. May 8 00:47:39.762170 sshd[3388]: pam_unix(sshd:session): session closed for user core May 8 00:47:39.765097 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:46234.service: Deactivated successfully. May 8 00:47:39.765810 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:47:39.766567 systemd-logind[1205]: Session 11 logged out. Waiting for processes to exit. May 8 00:47:39.767406 systemd-logind[1205]: Removed session 11. May 8 00:47:44.767807 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:46248.service. 
May 8 00:47:44.806352 sshd[3402]: Accepted publickey for core from 10.0.0.1 port 46248 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:44.807651 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:44.811168 systemd-logind[1205]: New session 12 of user core. May 8 00:47:44.811954 systemd[1]: Started session-12.scope. May 8 00:47:45.012107 sshd[3402]: pam_unix(sshd:session): session closed for user core May 8 00:47:45.014876 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:46248.service: Deactivated successfully. May 8 00:47:45.015393 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:47:45.017761 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:52278.service. May 8 00:47:45.018336 systemd-logind[1205]: Session 12 logged out. Waiting for processes to exit. May 8 00:47:45.019133 systemd-logind[1205]: Removed session 12. May 8 00:47:45.050529 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 52278 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:45.052039 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:45.055973 systemd-logind[1205]: New session 13 of user core. May 8 00:47:45.056868 systemd[1]: Started session-13.scope. May 8 00:47:45.311250 sshd[3418]: pam_unix(sshd:session): session closed for user core May 8 00:47:45.315614 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:52284.service. May 8 00:47:45.318008 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:52278.service: Deactivated successfully. May 8 00:47:45.320828 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:47:45.321616 systemd-logind[1205]: Session 13 logged out. Waiting for processes to exit. May 8 00:47:45.322454 systemd-logind[1205]: Removed session 13. 
May 8 00:47:45.351989 sshd[3428]: Accepted publickey for core from 10.0.0.1 port 52284 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:45.353301 sshd[3428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:45.357085 systemd-logind[1205]: New session 14 of user core. May 8 00:47:45.357892 systemd[1]: Started session-14.scope. May 8 00:47:45.466853 sshd[3428]: pam_unix(sshd:session): session closed for user core May 8 00:47:45.470205 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:52284.service: Deactivated successfully. May 8 00:47:45.470873 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:47:45.471519 systemd-logind[1205]: Session 14 logged out. Waiting for processes to exit. May 8 00:47:45.472367 systemd-logind[1205]: Removed session 14. May 8 00:47:47.308484 kubelet[1954]: E0508 00:47:47.308431 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:50.472295 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:52298.service. May 8 00:47:50.512779 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 52298 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:50.514453 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:50.518399 systemd-logind[1205]: New session 15 of user core. May 8 00:47:50.519223 systemd[1]: Started session-15.scope. May 8 00:47:50.643126 sshd[3443]: pam_unix(sshd:session): session closed for user core May 8 00:47:50.645591 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:52298.service: Deactivated successfully. May 8 00:47:50.646529 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:47:50.647359 systemd-logind[1205]: Session 15 logged out. Waiting for processes to exit. May 8 00:47:50.648313 systemd-logind[1205]: Removed session 15. 
May 8 00:47:51.267222 kubelet[1954]: E0508 00:47:51.267178 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:55.648810 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:51332.service. May 8 00:47:55.683332 sshd[3458]: Accepted publickey for core from 10.0.0.1 port 51332 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:55.684707 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:55.688624 systemd-logind[1205]: New session 16 of user core. May 8 00:47:55.689623 systemd[1]: Started session-16.scope. May 8 00:47:55.799889 sshd[3458]: pam_unix(sshd:session): session closed for user core May 8 00:47:55.802202 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:51332.service: Deactivated successfully. May 8 00:47:55.802949 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:47:55.803681 systemd-logind[1205]: Session 16 logged out. Waiting for processes to exit. May 8 00:47:55.804312 systemd-logind[1205]: Removed session 16. May 8 00:47:59.267340 kubelet[1954]: E0508 00:47:59.267295 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:00.804775 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:51338.service. May 8 00:48:00.837670 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 51338 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:00.921548 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:00.924843 systemd-logind[1205]: New session 17 of user core. May 8 00:48:00.925664 systemd[1]: Started session-17.scope. 
May 8 00:48:01.378080 sshd[3471]: pam_unix(sshd:session): session closed for user core May 8 00:48:01.381136 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:51338.service: Deactivated successfully. May 8 00:48:01.381743 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:48:01.382304 systemd-logind[1205]: Session 17 logged out. Waiting for processes to exit. May 8 00:48:01.383461 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:51350.service. May 8 00:48:01.384209 systemd-logind[1205]: Removed session 17. May 8 00:48:01.414555 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 51350 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:01.415555 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:01.418697 systemd-logind[1205]: New session 18 of user core. May 8 00:48:01.419447 systemd[1]: Started session-18.scope. May 8 00:48:02.488691 sshd[3485]: pam_unix(sshd:session): session closed for user core May 8 00:48:02.492206 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:51350.service: Deactivated successfully. May 8 00:48:02.492959 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:48:02.493754 systemd-logind[1205]: Session 18 logged out. Waiting for processes to exit. May 8 00:48:02.495309 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:51358.service. May 8 00:48:02.496320 systemd-logind[1205]: Removed session 18. May 8 00:48:02.531838 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 51358 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:02.533495 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:02.539016 systemd-logind[1205]: New session 19 of user core. May 8 00:48:02.540304 systemd[1]: Started session-19.scope. May 8 00:48:03.761025 sshd[3497]: pam_unix(sshd:session): session closed for user core May 8 00:48:03.764858 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:51370.service. 
May 8 00:48:03.765777 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:51358.service: Deactivated successfully. May 8 00:48:03.766749 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:48:03.767999 systemd-logind[1205]: Session 19 logged out. Waiting for processes to exit. May 8 00:48:03.769167 systemd-logind[1205]: Removed session 19. May 8 00:48:03.806803 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 51370 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:03.808417 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:03.812593 systemd-logind[1205]: New session 20 of user core. May 8 00:48:03.813624 systemd[1]: Started session-20.scope. May 8 00:48:05.099376 sshd[3514]: pam_unix(sshd:session): session closed for user core May 8 00:48:05.102890 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:51370.service: Deactivated successfully. May 8 00:48:05.103464 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:48:05.104007 systemd-logind[1205]: Session 20 logged out. Waiting for processes to exit. May 8 00:48:05.105290 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:45642.service. May 8 00:48:05.106210 systemd-logind[1205]: Removed session 20. May 8 00:48:05.139337 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 45642 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:05.140668 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:05.144330 systemd-logind[1205]: New session 21 of user core. May 8 00:48:05.145378 systemd[1]: Started session-21.scope. May 8 00:48:05.267461 sshd[3528]: pam_unix(sshd:session): session closed for user core May 8 00:48:05.270900 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:45642.service: Deactivated successfully. May 8 00:48:05.271660 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:48:05.272257 systemd-logind[1205]: Session 21 logged out. 
Waiting for processes to exit. May 8 00:48:05.272954 systemd-logind[1205]: Removed session 21. May 8 00:48:10.267489 kubelet[1954]: E0508 00:48:10.267150 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:10.273029 systemd[1]: Started sshd@21-10.0.0.73:22-10.0.0.1:45648.service. May 8 00:48:10.309862 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 45648 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:10.311229 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:10.315190 systemd-logind[1205]: New session 22 of user core. May 8 00:48:10.316065 systemd[1]: Started session-22.scope. May 8 00:48:10.441755 sshd[3541]: pam_unix(sshd:session): session closed for user core May 8 00:48:10.444527 systemd[1]: sshd@21-10.0.0.73:22-10.0.0.1:45648.service: Deactivated successfully. May 8 00:48:10.445428 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:48:10.446145 systemd-logind[1205]: Session 22 logged out. Waiting for processes to exit. May 8 00:48:10.447220 systemd-logind[1205]: Removed session 22. May 8 00:48:15.448472 systemd[1]: Started sshd@22-10.0.0.73:22-10.0.0.1:39634.service. May 8 00:48:15.506473 sshd[3557]: Accepted publickey for core from 10.0.0.1 port 39634 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:15.508225 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:15.512266 systemd-logind[1205]: New session 23 of user core. May 8 00:48:15.513285 systemd[1]: Started session-23.scope. May 8 00:48:15.621563 sshd[3557]: pam_unix(sshd:session): session closed for user core May 8 00:48:15.623778 systemd[1]: sshd@22-10.0.0.73:22-10.0.0.1:39634.service: Deactivated successfully. May 8 00:48:15.624519 systemd[1]: session-23.scope: Deactivated successfully. 
May 8 00:48:15.625163 systemd-logind[1205]: Session 23 logged out. Waiting for processes to exit. May 8 00:48:15.625991 systemd-logind[1205]: Removed session 23. May 8 00:48:20.626097 systemd[1]: Started sshd@23-10.0.0.73:22-10.0.0.1:39642.service. May 8 00:48:20.658659 sshd[3572]: Accepted publickey for core from 10.0.0.1 port 39642 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:20.660040 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:20.663501 systemd-logind[1205]: New session 24 of user core. May 8 00:48:20.664569 systemd[1]: Started session-24.scope. May 8 00:48:20.802561 sshd[3572]: pam_unix(sshd:session): session closed for user core May 8 00:48:20.804753 systemd[1]: sshd@23-10.0.0.73:22-10.0.0.1:39642.service: Deactivated successfully. May 8 00:48:20.805407 systemd[1]: session-24.scope: Deactivated successfully. May 8 00:48:20.806017 systemd-logind[1205]: Session 24 logged out. Waiting for processes to exit. May 8 00:48:20.806629 systemd-logind[1205]: Removed session 24. May 8 00:48:21.267634 kubelet[1954]: E0508 00:48:21.267548 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:23.266988 kubelet[1954]: E0508 00:48:23.266940 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:25.267393 kubelet[1954]: E0508 00:48:25.267350 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:25.806919 systemd[1]: Started sshd@24-10.0.0.73:22-10.0.0.1:35904.service. 
May 8 00:48:25.926838 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 35904 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:25.928205 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:25.932161 systemd-logind[1205]: New session 25 of user core. May 8 00:48:25.933051 systemd[1]: Started session-25.scope. May 8 00:48:26.039957 sshd[3586]: pam_unix(sshd:session): session closed for user core May 8 00:48:26.042037 systemd[1]: sshd@24-10.0.0.73:22-10.0.0.1:35904.service: Deactivated successfully. May 8 00:48:26.042788 systemd[1]: session-25.scope: Deactivated successfully. May 8 00:48:26.043290 systemd-logind[1205]: Session 25 logged out. Waiting for processes to exit. May 8 00:48:26.044116 systemd-logind[1205]: Removed session 25. May 8 00:48:31.045175 systemd[1]: Started sshd@25-10.0.0.73:22-10.0.0.1:35920.service. May 8 00:48:31.077871 sshd[3600]: Accepted publickey for core from 10.0.0.1 port 35920 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:31.079142 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:31.083020 systemd-logind[1205]: New session 26 of user core. May 8 00:48:31.084152 systemd[1]: Started session-26.scope. May 8 00:48:31.242062 sshd[3600]: pam_unix(sshd:session): session closed for user core May 8 00:48:31.245181 systemd[1]: sshd@25-10.0.0.73:22-10.0.0.1:35920.service: Deactivated successfully. May 8 00:48:31.245701 systemd[1]: session-26.scope: Deactivated successfully. May 8 00:48:31.247407 systemd[1]: Started sshd@26-10.0.0.73:22-10.0.0.1:35932.service. May 8 00:48:31.248085 systemd-logind[1205]: Session 26 logged out. Waiting for processes to exit. May 8 00:48:31.249119 systemd-logind[1205]: Removed session 26. 
May 8 00:48:31.280169 sshd[3614]: Accepted publickey for core from 10.0.0.1 port 35932 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:31.281485 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:31.285254 systemd-logind[1205]: New session 27 of user core. May 8 00:48:31.286077 systemd[1]: Started session-27.scope. May 8 00:48:32.724442 env[1220]: time="2025-05-08T00:48:32.724390451Z" level=info msg="StopContainer for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" with timeout 30 (s)" May 8 00:48:32.726134 env[1220]: time="2025-05-08T00:48:32.726099299Z" level=info msg="Stop container \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" with signal terminated" May 8 00:48:32.737780 systemd[1]: cri-containerd-dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac.scope: Deactivated successfully. May 8 00:48:32.748975 env[1220]: time="2025-05-08T00:48:32.748891955Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:48:32.754691 env[1220]: time="2025-05-08T00:48:32.754650814Z" level=info msg="StopContainer for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" with timeout 2 (s)" May 8 00:48:32.755104 env[1220]: time="2025-05-08T00:48:32.755087077Z" level=info msg="Stop container \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" with signal terminated" May 8 00:48:32.758695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac-rootfs.mount: Deactivated successfully. 
May 8 00:48:32.765201 systemd-networkd[1039]: lxc_health: Link DOWN May 8 00:48:32.765210 systemd-networkd[1039]: lxc_health: Lost carrier May 8 00:48:32.769658 env[1220]: time="2025-05-08T00:48:32.769569406Z" level=info msg="shim disconnected" id=dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac May 8 00:48:32.769658 env[1220]: time="2025-05-08T00:48:32.769653955Z" level=warning msg="cleaning up after shim disconnected" id=dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac namespace=k8s.io May 8 00:48:32.769828 env[1220]: time="2025-05-08T00:48:32.769670246Z" level=info msg="cleaning up dead shim" May 8 00:48:32.787613 env[1220]: time="2025-05-08T00:48:32.780996401Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3666 runtime=io.containerd.runc.v2\n" May 8 00:48:32.787613 env[1220]: time="2025-05-08T00:48:32.784096437Z" level=info msg="StopContainer for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" returns successfully" May 8 00:48:32.787613 env[1220]: time="2025-05-08T00:48:32.784685189Z" level=info msg="StopPodSandbox for \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\"" May 8 00:48:32.787613 env[1220]: time="2025-05-08T00:48:32.784760942Z" level=info msg="Container to stop \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:48:32.786697 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185-shm.mount: Deactivated successfully. May 8 00:48:32.795417 systemd[1]: cri-containerd-2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185.scope: Deactivated successfully. May 8 00:48:32.810951 systemd[1]: cri-containerd-1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e.scope: Deactivated successfully. 
May 8 00:48:32.811310 systemd[1]: cri-containerd-1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e.scope: Consumed 6.815s CPU time. May 8 00:48:32.822546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185-rootfs.mount: Deactivated successfully. May 8 00:48:32.832189 env[1220]: time="2025-05-08T00:48:32.832087824Z" level=info msg="shim disconnected" id=2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185 May 8 00:48:32.832189 env[1220]: time="2025-05-08T00:48:32.832150783Z" level=warning msg="cleaning up after shim disconnected" id=2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185 namespace=k8s.io May 8 00:48:32.832189 env[1220]: time="2025-05-08T00:48:32.832161784Z" level=info msg="cleaning up dead shim" May 8 00:48:32.835251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e-rootfs.mount: Deactivated successfully. 
May 8 00:48:32.842914 env[1220]: time="2025-05-08T00:48:32.842841659Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n"
May 8 00:48:32.843379 env[1220]: time="2025-05-08T00:48:32.843327025Z" level=info msg="TearDown network for sandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" successfully"
May 8 00:48:32.843434 env[1220]: time="2025-05-08T00:48:32.843378423Z" level=info msg="StopPodSandbox for \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" returns successfully"
May 8 00:48:32.849808 env[1220]: time="2025-05-08T00:48:32.849200641Z" level=info msg="shim disconnected" id=1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e
May 8 00:48:32.849808 env[1220]: time="2025-05-08T00:48:32.849250996Z" level=warning msg="cleaning up after shim disconnected" id=1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e namespace=k8s.io
May 8 00:48:32.849808 env[1220]: time="2025-05-08T00:48:32.849275503Z" level=info msg="cleaning up dead shim"
May 8 00:48:32.860100 env[1220]: time="2025-05-08T00:48:32.860037051Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3724 runtime=io.containerd.runc.v2\n"
May 8 00:48:32.862178 env[1220]: time="2025-05-08T00:48:32.862131898Z" level=info msg="StopContainer for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" returns successfully"
May 8 00:48:32.862696 env[1220]: time="2025-05-08T00:48:32.862654295Z" level=info msg="StopPodSandbox for \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\""
May 8 00:48:32.862864 env[1220]: time="2025-05-08T00:48:32.862745918Z" level=info msg="Container to stop \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:48:32.862864 env[1220]: time="2025-05-08T00:48:32.862767670Z" level=info msg="Container to stop \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:48:32.862864 env[1220]: time="2025-05-08T00:48:32.862782498Z" level=info msg="Container to stop \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:48:32.862864 env[1220]: time="2025-05-08T00:48:32.862799339Z" level=info msg="Container to stop \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:48:32.862864 env[1220]: time="2025-05-08T00:48:32.862813065Z" level=info msg="Container to stop \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 00:48:32.870070 systemd[1]: cri-containerd-d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824.scope: Deactivated successfully.
May 8 00:48:32.895049 env[1220]: time="2025-05-08T00:48:32.894947418Z" level=info msg="shim disconnected" id=d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824
May 8 00:48:32.895049 env[1220]: time="2025-05-08T00:48:32.895009706Z" level=warning msg="cleaning up after shim disconnected" id=d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824 namespace=k8s.io
May 8 00:48:32.895049 env[1220]: time="2025-05-08T00:48:32.895019674Z" level=info msg="cleaning up dead shim"
May 8 00:48:32.905028 env[1220]: time="2025-05-08T00:48:32.904955563Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3754 runtime=io.containerd.runc.v2\n"
May 8 00:48:32.905806 env[1220]: time="2025-05-08T00:48:32.905768319Z" level=info msg="TearDown network for sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" successfully"
May 8 00:48:32.905913 env[1220]: time="2025-05-08T00:48:32.905807001Z" level=info msg="StopPodSandbox for \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" returns successfully"
May 8 00:48:33.028695 kubelet[1954]: I0508 00:48:33.027540 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-net\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.028695 kubelet[1954]: I0508 00:48:33.027631 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hostproc\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.028695 kubelet[1954]: I0508 00:48:33.027649 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-kernel\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.028695 kubelet[1954]: I0508 00:48:33.027671 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdx8h\" (UniqueName: \"kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-kube-api-access-vdx8h\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.028695 kubelet[1954]: I0508 00:48:33.027684 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-etc-cni-netd\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.028695 kubelet[1954]: I0508 00:48:33.027678 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029255 kubelet[1954]: I0508 00:48:33.027696 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-cgroup\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029255 kubelet[1954]: I0508 00:48:33.027744 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029255 kubelet[1954]: I0508 00:48:33.027778 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-xtables-lock\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029255 kubelet[1954]: I0508 00:48:33.027789 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029255 kubelet[1954]: I0508 00:48:33.027807 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029386 kubelet[1954]: I0508 00:48:33.027886 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029386 kubelet[1954]: I0508 00:48:33.027927 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029386 kubelet[1954]: I0508 00:48:33.027975 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.029386 kubelet[1954]: I0508 00:48:33.027953 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-run\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029386 kubelet[1954]: I0508 00:48:33.028010 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-config-path\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029514 kubelet[1954]: I0508 00:48:33.028024 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-lib-modules\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029514 kubelet[1954]: I0508 00:48:33.028042 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vh2d\" (UniqueName: \"kubernetes.io/projected/54bcdab1-0cda-437c-b15d-2390515fe3fa-kube-api-access-2vh2d\") pod \"54bcdab1-0cda-437c-b15d-2390515fe3fa\" (UID: \"54bcdab1-0cda-437c-b15d-2390515fe3fa\") "
May 8 00:48:33.029514 kubelet[1954]: I0508 00:48:33.028056 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-bpf-maps\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029514 kubelet[1954]: I0508 00:48:33.028068 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cni-path\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029514 kubelet[1954]: I0508 00:48:33.028082 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54bcdab1-0cda-437c-b15d-2390515fe3fa-cilium-config-path\") pod \"54bcdab1-0cda-437c-b15d-2390515fe3fa\" (UID: \"54bcdab1-0cda-437c-b15d-2390515fe3fa\") "
May 8 00:48:33.029514 kubelet[1954]: I0508 00:48:33.028097 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hubble-tls\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028118 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d574d22-4fe9-420a-bd2e-137aa18e77e1-clustermesh-secrets\") pod \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\" (UID: \"5d574d22-4fe9-420a-bd2e-137aa18e77e1\") "
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028162 1954 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028169 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-run\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028176 1954 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hostproc\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028183 1954 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028190 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028197 1954 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029679 kubelet[1954]: I0508 00:48:33.028203 1954 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.029904 kubelet[1954]: I0508 00:48:33.028400 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.031841 kubelet[1954]: I0508 00:48:33.030909 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54bcdab1-0cda-437c-b15d-2390515fe3fa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "54bcdab1-0cda-437c-b15d-2390515fe3fa" (UID: "54bcdab1-0cda-437c-b15d-2390515fe3fa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:48:33.031841 kubelet[1954]: I0508 00:48:33.030948 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.031841 kubelet[1954]: I0508 00:48:33.030966 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 00:48:33.031841 kubelet[1954]: I0508 00:48:33.031071 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-kube-api-access-vdx8h" (OuterVolumeSpecName: "kube-api-access-vdx8h") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "kube-api-access-vdx8h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:48:33.031841 kubelet[1954]: I0508 00:48:33.031767 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 00:48:33.033219 kubelet[1954]: I0508 00:48:33.033176 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bcdab1-0cda-437c-b15d-2390515fe3fa-kube-api-access-2vh2d" (OuterVolumeSpecName: "kube-api-access-2vh2d") pod "54bcdab1-0cda-437c-b15d-2390515fe3fa" (UID: "54bcdab1-0cda-437c-b15d-2390515fe3fa"). InnerVolumeSpecName "kube-api-access-2vh2d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:48:33.033291 kubelet[1954]: I0508 00:48:33.033261 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d574d22-4fe9-420a-bd2e-137aa18e77e1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 8 00:48:33.033291 kubelet[1954]: I0508 00:48:33.033264 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d574d22-4fe9-420a-bd2e-137aa18e77e1" (UID: "5d574d22-4fe9-420a-bd2e-137aa18e77e1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 00:48:33.129094 kubelet[1954]: I0508 00:48:33.129057 1954 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vdx8h\" (UniqueName: \"kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-kube-api-access-vdx8h\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129094 kubelet[1954]: I0508 00:48:33.129085 1954 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129094 kubelet[1954]: I0508 00:48:33.129092 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129094 kubelet[1954]: I0508 00:48:33.129099 1954 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vh2d\" (UniqueName: \"kubernetes.io/projected/54bcdab1-0cda-437c-b15d-2390515fe3fa-kube-api-access-2vh2d\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129094 kubelet[1954]: I0508 00:48:33.129107 1954 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-cni-path\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129406 kubelet[1954]: I0508 00:48:33.129114 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54bcdab1-0cda-437c-b15d-2390515fe3fa-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129406 kubelet[1954]: I0508 00:48:33.129121 1954 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d574d22-4fe9-420a-bd2e-137aa18e77e1-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129406 kubelet[1954]: I0508 00:48:33.129127 1954 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d574d22-4fe9-420a-bd2e-137aa18e77e1-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.129406 kubelet[1954]: I0508 00:48:33.129134 1954 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d574d22-4fe9-420a-bd2e-137aa18e77e1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 8 00:48:33.273087 systemd[1]: Removed slice kubepods-burstable-pod5d574d22_4fe9_420a_bd2e_137aa18e77e1.slice.
May 8 00:48:33.273199 systemd[1]: kubepods-burstable-pod5d574d22_4fe9_420a_bd2e_137aa18e77e1.slice: Consumed 6.921s CPU time.
May 8 00:48:33.274686 systemd[1]: Removed slice kubepods-besteffort-pod54bcdab1_0cda_437c_b15d_2390515fe3fa.slice.
May 8 00:48:33.723852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824-rootfs.mount: Deactivated successfully.
May 8 00:48:33.723973 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824-shm.mount: Deactivated successfully.
May 8 00:48:33.724033 systemd[1]: var-lib-kubelet-pods-54bcdab1\x2d0cda\x2d437c\x2db15d\x2d2390515fe3fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2vh2d.mount: Deactivated successfully.
May 8 00:48:33.724105 systemd[1]: var-lib-kubelet-pods-5d574d22\x2d4fe9\x2d420a\x2dbd2e\x2d137aa18e77e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvdx8h.mount: Deactivated successfully.
May 8 00:48:33.724185 systemd[1]: var-lib-kubelet-pods-5d574d22\x2d4fe9\x2d420a\x2dbd2e\x2d137aa18e77e1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 8 00:48:33.724242 systemd[1]: var-lib-kubelet-pods-5d574d22\x2d4fe9\x2d420a\x2dbd2e\x2d137aa18e77e1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 8 00:48:33.743483 kubelet[1954]: I0508 00:48:33.743443 1954 scope.go:117] "RemoveContainer" containerID="dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac"
May 8 00:48:33.744817 env[1220]: time="2025-05-08T00:48:33.744776086Z" level=info msg="RemoveContainer for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\""
May 8 00:48:33.749120 env[1220]: time="2025-05-08T00:48:33.749084573Z" level=info msg="RemoveContainer for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" returns successfully"
May 8 00:48:33.749372 kubelet[1954]: I0508 00:48:33.749345 1954 scope.go:117] "RemoveContainer" containerID="dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac"
May 8 00:48:33.749705 env[1220]: time="2025-05-08T00:48:33.749591220Z" level=error msg="ContainerStatus for \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\": not found"
May 8 00:48:33.749854 kubelet[1954]: E0508 00:48:33.749827 1954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\": not found" containerID="dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac"
May 8 00:48:33.749937 kubelet[1954]: I0508 00:48:33.749860 1954 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac"} err="failed to get container status \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbfb30028dfda6f3f678f7082f29d60cb32acfa371780692a55408b205ed37ac\": not found"
May 8 00:48:33.749979 kubelet[1954]: I0508 00:48:33.749944 1954 scope.go:117] "RemoveContainer" containerID="1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e"
May 8 00:48:33.751227 env[1220]: time="2025-05-08T00:48:33.751115029Z" level=info msg="RemoveContainer for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\""
May 8 00:48:33.754391 env[1220]: time="2025-05-08T00:48:33.754326023Z" level=info msg="RemoveContainer for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" returns successfully"
May 8 00:48:33.754636 kubelet[1954]: I0508 00:48:33.754557 1954 scope.go:117] "RemoveContainer" containerID="a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d"
May 8 00:48:33.756131 env[1220]: time="2025-05-08T00:48:33.756087681Z" level=info msg="RemoveContainer for \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\""
May 8 00:48:33.759423 env[1220]: time="2025-05-08T00:48:33.759387834Z" level=info msg="RemoveContainer for \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\" returns successfully"
May 8 00:48:33.759778 kubelet[1954]: I0508 00:48:33.759753 1954 scope.go:117] "RemoveContainer" containerID="898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e"
May 8 00:48:33.761593 env[1220]: time="2025-05-08T00:48:33.761539569Z" level=info msg="RemoveContainer for \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\""
May 8 00:48:33.771035 env[1220]: time="2025-05-08T00:48:33.770904336Z" level=info msg="RemoveContainer for \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\" returns successfully"
May 8 00:48:33.771262 kubelet[1954]: I0508 00:48:33.771165 1954 scope.go:117] "RemoveContainer" containerID="8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b"
May 8 00:48:33.772308 env[1220]: time="2025-05-08T00:48:33.772272450Z" level=info msg="RemoveContainer for \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\""
May 8 00:48:33.775937 env[1220]: time="2025-05-08T00:48:33.775897508Z" level=info msg="RemoveContainer for \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\" returns successfully"
May 8 00:48:33.776140 kubelet[1954]: I0508 00:48:33.776102 1954 scope.go:117] "RemoveContainer" containerID="46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3"
May 8 00:48:33.777055 env[1220]: time="2025-05-08T00:48:33.777020088Z" level=info msg="RemoveContainer for \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\""
May 8 00:48:33.780128 env[1220]: time="2025-05-08T00:48:33.780087602Z" level=info msg="RemoveContainer for \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\" returns successfully"
May 8 00:48:33.780300 kubelet[1954]: I0508 00:48:33.780260 1954 scope.go:117] "RemoveContainer" containerID="1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e"
May 8 00:48:33.780486 env[1220]: time="2025-05-08T00:48:33.780429698Z" level=error msg="ContainerStatus for \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\": not found"
May 8 00:48:33.780594 kubelet[1954]: E0508 00:48:33.780550 1954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\": not found" containerID="1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e"
May 8 00:48:33.780639 kubelet[1954]: I0508 00:48:33.780590 1954 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e"} err="failed to get container status \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1466456acda9c8f926fea9e855e7d4bf548cd2efff985a8d0d407d905136c55e\": not found"
May 8 00:48:33.780639 kubelet[1954]: I0508 00:48:33.780614 1954 scope.go:117] "RemoveContainer" containerID="a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d"
May 8 00:48:33.780859 env[1220]: time="2025-05-08T00:48:33.780802592Z" level=error msg="ContainerStatus for \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\": not found"
May 8 00:48:33.780971 kubelet[1954]: E0508 00:48:33.780949 1954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\": not found" containerID="a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d"
May 8 00:48:33.780971 kubelet[1954]: I0508 00:48:33.780968 1954 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d"} err="failed to get container status \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a686362e457d09fcba5194ea835b5c8a20c994d032e8c5a83e40e80f855d2f1d\": not found"
May 8 00:48:33.781067 kubelet[1954]: I0508 00:48:33.780980 1954 scope.go:117] "RemoveContainer" containerID="898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e"
May 8 00:48:33.781175 env[1220]: time="2025-05-08T00:48:33.781124991Z" level=error msg="ContainerStatus for \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\": not found"
May 8 00:48:33.781259 kubelet[1954]: E0508 00:48:33.781240 1954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\": not found" containerID="898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e"
May 8 00:48:33.781292 kubelet[1954]: I0508 00:48:33.781258 1954 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e"} err="failed to get container status \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\": rpc error: code = NotFound desc = an error occurred when try to find container \"898a04be57e246df5f198f5aeaa3afcdec6114e829ccabdd2f5c8428f42bf95e\": not found"
May 8 00:48:33.781292 kubelet[1954]: I0508 00:48:33.781269 1954 scope.go:117] "RemoveContainer" containerID="8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b"
May 8 00:48:33.781476 env[1220]: time="2025-05-08T00:48:33.781431440Z" level=error msg="ContainerStatus for \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\": not found"
May 8 00:48:33.781558 kubelet[1954]: E0508 00:48:33.781540 1954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\": not found" containerID="8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b"
May 8 00:48:33.781601 kubelet[1954]: I0508 00:48:33.781558 1954 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b"} err="failed to get container status \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d3cded2624e0f1fba6344af1bb5bd4d3fec7bacc697ba59b419ab775e80623b\": not found"
May 8 00:48:33.781601 kubelet[1954]: I0508 00:48:33.781584 1954 scope.go:117] "RemoveContainer" containerID="46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3"
May 8 00:48:33.781783 env[1220]: time="2025-05-08T00:48:33.781737298Z" level=error msg="ContainerStatus for \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\": not found"
May 8 00:48:33.781883 kubelet[1954]: E0508 00:48:33.781854 1954 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\": not found" containerID="46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3"
May 8 00:48:33.781917 kubelet[1954]: I0508 00:48:33.781884 1954 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3"} err="failed to get container status \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"46755bd3ba49e9c4c971dbd8fe85f38d64318f77b09b4cfc10bba0462f8451c3\": not found"
May 8 00:48:34.683537 sshd[3614]: pam_unix(sshd:session): session closed for user core
May 8 00:48:34.686955 systemd[1]: sshd@26-10.0.0.73:22-10.0.0.1:35932.service: Deactivated successfully.
May 8 00:48:34.687670 systemd[1]: session-27.scope: Deactivated successfully.
May 8 00:48:34.688305 systemd-logind[1205]: Session 27 logged out. Waiting for processes to exit.
May 8 00:48:34.689777 systemd[1]: Started sshd@27-10.0.0.73:22-10.0.0.1:35938.service.
May 8 00:48:34.690880 systemd-logind[1205]: Removed session 27.
May 8 00:48:34.725432 sshd[3772]: Accepted publickey for core from 10.0.0.1 port 35938 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U
May 8 00:48:34.726474 sshd[3772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:48:34.730429 systemd-logind[1205]: New session 28 of user core.
May 8 00:48:34.731212 systemd[1]: Started session-28.scope.
May 8 00:48:35.269328 kubelet[1954]: I0508 00:48:35.269262 1954 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bcdab1-0cda-437c-b15d-2390515fe3fa" path="/var/lib/kubelet/pods/54bcdab1-0cda-437c-b15d-2390515fe3fa/volumes"
May 8 00:48:35.269994 kubelet[1954]: I0508 00:48:35.269958 1954 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d574d22-4fe9-420a-bd2e-137aa18e77e1" path="/var/lib/kubelet/pods/5d574d22-4fe9-420a-bd2e-137aa18e77e1/volumes"
May 8 00:48:35.436645 sshd[3772]: pam_unix(sshd:session): session closed for user core
May 8 00:48:35.443036 systemd[1]: Started sshd@28-10.0.0.73:22-10.0.0.1:45650.service.
May 8 00:48:35.447513 systemd-logind[1205]: Session 28 logged out. Waiting for processes to exit.
May 8 00:48:35.449467 systemd[1]: sshd@27-10.0.0.73:22-10.0.0.1:35938.service: Deactivated successfully.
May 8 00:48:35.450346 systemd[1]: session-28.scope: Deactivated successfully.
May 8 00:48:35.452278 systemd-logind[1205]: Removed session 28.
May 8 00:48:35.453362 kubelet[1954]: I0508 00:48:35.453320 1954 memory_manager.go:355] "RemoveStaleState removing state" podUID="54bcdab1-0cda-437c-b15d-2390515fe3fa" containerName="cilium-operator" May 8 00:48:35.453362 kubelet[1954]: I0508 00:48:35.453353 1954 memory_manager.go:355] "RemoveStaleState removing state" podUID="5d574d22-4fe9-420a-bd2e-137aa18e77e1" containerName="cilium-agent" May 8 00:48:35.460517 systemd[1]: Created slice kubepods-burstable-podfd8c11a4_dc9b_4c6d_9f78_6d039cc732a2.slice. May 8 00:48:35.490891 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 45650 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:35.492595 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:35.497849 systemd[1]: Started session-29.scope. May 8 00:48:35.497984 systemd-logind[1205]: New session 29 of user core. May 8 00:48:35.548704 kubelet[1954]: I0508 00:48:35.548565 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-xtables-lock\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.548933 kubelet[1954]: I0508 00:48:35.548885 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-kernel\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.548933 kubelet[1954]: I0508 00:48:35.548919 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hostproc\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " 
pod="kube-system/cilium-shsvf" May 8 00:48:35.548933 kubelet[1954]: I0508 00:48:35.548934 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-etc-cni-netd\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.548933 kubelet[1954]: I0508 00:48:35.548946 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-clustermesh-secrets\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549261 kubelet[1954]: I0508 00:48:35.548961 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-config-path\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549261 kubelet[1954]: I0508 00:48:35.548981 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-bpf-maps\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549261 kubelet[1954]: I0508 00:48:35.549001 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cni-path\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549261 kubelet[1954]: I0508 00:48:35.549016 1954 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-ipsec-secrets\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549261 kubelet[1954]: I0508 00:48:35.549029 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-cgroup\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549261 kubelet[1954]: I0508 00:48:35.549044 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-lib-modules\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549452 kubelet[1954]: I0508 00:48:35.549057 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-net\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549452 kubelet[1954]: I0508 00:48:35.549071 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cj9n\" (UniqueName: \"kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-kube-api-access-9cj9n\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549452 kubelet[1954]: I0508 00:48:35.549087 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-run\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.549452 kubelet[1954]: I0508 00:48:35.549108 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hubble-tls\") pod \"cilium-shsvf\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " pod="kube-system/cilium-shsvf" May 8 00:48:35.637530 sshd[3783]: pam_unix(sshd:session): session closed for user core May 8 00:48:35.643266 systemd[1]: Started sshd@29-10.0.0.73:22-10.0.0.1:45660.service. May 8 00:48:35.644134 systemd[1]: sshd@28-10.0.0.73:22-10.0.0.1:45650.service: Deactivated successfully. May 8 00:48:35.645133 systemd[1]: session-29.scope: Deactivated successfully. May 8 00:48:35.649792 kubelet[1954]: E0508 00:48:35.649744 1954 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-9cj9n lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-shsvf" podUID="fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" May 8 00:48:35.650775 systemd-logind[1205]: Session 29 logged out. Waiting for processes to exit. May 8 00:48:35.654712 systemd-logind[1205]: Removed session 29. May 8 00:48:35.689324 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 45660 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:48:35.691379 sshd[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:48:35.697132 systemd-logind[1205]: New session 30 of user core. May 8 00:48:35.698532 systemd[1]: Started session-30.scope. 
May 8 00:48:35.852046 kubelet[1954]: I0508 00:48:35.852009 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cni-path\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852282 kubelet[1954]: I0508 00:48:35.852249 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-run\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852282 kubelet[1954]: I0508 00:48:35.852276 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-kernel\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852395 kubelet[1954]: I0508 00:48:35.852298 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-config-path\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852395 kubelet[1954]: I0508 00:48:35.852315 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hubble-tls\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852395 kubelet[1954]: I0508 00:48:35.852146 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cni-path" (OuterVolumeSpecName: "cni-path") pod 
"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852395 kubelet[1954]: I0508 00:48:35.852330 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-ipsec-secrets\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852395 kubelet[1954]: I0508 00:48:35.852344 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hostproc\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852395 kubelet[1954]: I0508 00:48:35.852356 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-cgroup\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852643 kubelet[1954]: I0508 00:48:35.852369 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-xtables-lock\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852643 kubelet[1954]: I0508 00:48:35.852382 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cj9n\" (UniqueName: \"kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-kube-api-access-9cj9n\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852643 kubelet[1954]: I0508 00:48:35.852314 1954 
operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852643 kubelet[1954]: I0508 00:48:35.852412 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852643 kubelet[1954]: I0508 00:48:35.852368 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852836 kubelet[1954]: I0508 00:48:35.852433 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852836 kubelet[1954]: I0508 00:48:35.852444 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852836 kubelet[1954]: I0508 00:48:35.852780 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hostproc" (OuterVolumeSpecName: "hostproc") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.852836 kubelet[1954]: I0508 00:48:35.852396 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-etc-cni-netd\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.852836 kubelet[1954]: I0508 00:48:35.852829 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-clustermesh-secrets\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.853015 kubelet[1954]: I0508 00:48:35.852855 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-lib-modules\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.853015 kubelet[1954]: I0508 
00:48:35.852874 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-net\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.853015 kubelet[1954]: I0508 00:48:35.852896 1954 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-bpf-maps\") pod \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\" (UID: \"fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2\") " May 8 00:48:35.853015 kubelet[1954]: I0508 00:48:35.852938 1954 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853015 kubelet[1954]: I0508 00:48:35.852951 1954 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853015 kubelet[1954]: I0508 00:48:35.852962 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853015 kubelet[1954]: I0508 00:48:35.852973 1954 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853260 kubelet[1954]: I0508 00:48:35.852984 1954 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853260 
kubelet[1954]: I0508 00:48:35.852995 1954 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853260 kubelet[1954]: I0508 00:48:35.853005 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.853260 kubelet[1954]: I0508 00:48:35.853030 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.854650 kubelet[1954]: I0508 00:48:35.854613 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:48:35.854765 kubelet[1954]: I0508 00:48:35.854734 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.854838 kubelet[1954]: I0508 00:48:35.854737 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:35.856767 systemd[1]: var-lib-kubelet-pods-fd8c11a4\x2ddc9b\x2d4c6d\x2d9f78\x2d6d039cc732a2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 8 00:48:35.857517 kubelet[1954]: I0508 00:48:35.857394 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-kube-api-access-9cj9n" (OuterVolumeSpecName: "kube-api-access-9cj9n") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "kube-api-access-9cj9n". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:48:35.857906 kubelet[1954]: I0508 00:48:35.857488 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:48:35.857906 kubelet[1954]: I0508 00:48:35.857855 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:48:35.858807 kubelet[1954]: I0508 00:48:35.858761 1954 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" (UID: "fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:48:35.858948 systemd[1]: var-lib-kubelet-pods-fd8c11a4\x2ddc9b\x2d4c6d\x2d9f78\x2d6d039cc732a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cj9n.mount: Deactivated successfully. May 8 00:48:35.859020 systemd[1]: var-lib-kubelet-pods-fd8c11a4\x2ddc9b\x2d4c6d\x2d9f78\x2d6d039cc732a2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:48:35.859072 systemd[1]: var-lib-kubelet-pods-fd8c11a4\x2ddc9b\x2d4c6d\x2d9f78\x2d6d039cc732a2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 00:48:35.953314 kubelet[1954]: I0508 00:48:35.953253 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953314 kubelet[1954]: I0508 00:48:35.953290 1954 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953314 kubelet[1954]: I0508 00:48:35.953301 1954 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953314 kubelet[1954]: I0508 00:48:35.953313 1954 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9cj9n\" (UniqueName: \"kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-kube-api-access-9cj9n\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953314 kubelet[1954]: I0508 00:48:35.953324 1954 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953314 kubelet[1954]: I0508 00:48:35.953334 1954 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953687 kubelet[1954]: I0508 00:48:35.953343 1954 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:48:35.953687 kubelet[1954]: I0508 00:48:35.953351 1954 
reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 8 00:48:36.759133 systemd[1]: Removed slice kubepods-burstable-podfd8c11a4_dc9b_4c6d_9f78_6d039cc732a2.slice.
May 8 00:48:36.794884 systemd[1]: Created slice kubepods-burstable-pod751c9fa7_9158_4d64_8593_a81364570b34.slice.
May 8 00:48:36.960716 kubelet[1954]: I0508 00:48:36.960666 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-lib-modules\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.960716 kubelet[1954]: I0508 00:48:36.960711 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-cni-path\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.960716 kubelet[1954]: I0508 00:48:36.960728 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-etc-cni-netd\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961210 kubelet[1954]: I0508 00:48:36.960741 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/751c9fa7-9158-4d64-8593-a81364570b34-clustermesh-secrets\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961210 kubelet[1954]: I0508 00:48:36.960756 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-cilium-run\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961210 kubelet[1954]: I0508 00:48:36.960769 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/751c9fa7-9158-4d64-8593-a81364570b34-cilium-config-path\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961210 kubelet[1954]: I0508 00:48:36.960782 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-host-proc-sys-kernel\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961210 kubelet[1954]: I0508 00:48:36.960793 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/751c9fa7-9158-4d64-8593-a81364570b34-hubble-tls\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961340 kubelet[1954]: I0508 00:48:36.960819 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krpb\" (UniqueName: \"kubernetes.io/projected/751c9fa7-9158-4d64-8593-a81364570b34-kube-api-access-9krpb\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961340 kubelet[1954]: I0508 00:48:36.960832 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-bpf-maps\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961340 kubelet[1954]: I0508 00:48:36.960848 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-hostproc\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961340 kubelet[1954]: I0508 00:48:36.960861 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-cilium-cgroup\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961340 kubelet[1954]: I0508 00:48:36.960873 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-xtables-lock\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961340 kubelet[1954]: I0508 00:48:36.960886 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/751c9fa7-9158-4d64-8593-a81364570b34-cilium-ipsec-secrets\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:36.961472 kubelet[1954]: I0508 00:48:36.960902 1954 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/751c9fa7-9158-4d64-8593-a81364570b34-host-proc-sys-net\") pod \"cilium-tjtnm\" (UID: \"751c9fa7-9158-4d64-8593-a81364570b34\") " pod="kube-system/cilium-tjtnm"
May 8 00:48:37.097818 kubelet[1954]: E0508 00:48:37.097771 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:37.098312 env[1220]: time="2025-05-08T00:48:37.098263840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjtnm,Uid:751c9fa7-9158-4d64-8593-a81364570b34,Namespace:kube-system,Attempt:0,}"
May 8 00:48:37.116410 env[1220]: time="2025-05-08T00:48:37.116337665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:48:37.116410 env[1220]: time="2025-05-08T00:48:37.116376538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:48:37.116410 env[1220]: time="2025-05-08T00:48:37.116387219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:48:37.116656 env[1220]: time="2025-05-08T00:48:37.116528114Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f pid=3831 runtime=io.containerd.runc.v2
May 8 00:48:37.130857 systemd[1]: Started cri-containerd-b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f.scope.
May 8 00:48:37.157714 env[1220]: time="2025-05-08T00:48:37.157639455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjtnm,Uid:751c9fa7-9158-4d64-8593-a81364570b34,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\""
May 8 00:48:37.158687 kubelet[1954]: E0508 00:48:37.158418 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:37.160723 env[1220]: time="2025-05-08T00:48:37.160671470Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:48:37.173014 env[1220]: time="2025-05-08T00:48:37.172959441Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329\""
May 8 00:48:37.173539 env[1220]: time="2025-05-08T00:48:37.173412887Z" level=info msg="StartContainer for \"965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329\""
May 8 00:48:37.188925 systemd[1]: Started cri-containerd-965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329.scope.
May 8 00:48:37.212909 env[1220]: time="2025-05-08T00:48:37.212843395Z" level=info msg="StartContainer for \"965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329\" returns successfully"
May 8 00:48:37.222437 systemd[1]: cri-containerd-965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329.scope: Deactivated successfully.
May 8 00:48:37.246502 env[1220]: time="2025-05-08T00:48:37.246465145Z" level=info msg="StopPodSandbox for \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\""
May 8 00:48:37.246908 env[1220]: time="2025-05-08T00:48:37.246816909Z" level=info msg="TearDown network for sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" successfully"
May 8 00:48:37.246908 env[1220]: time="2025-05-08T00:48:37.246886580Z" level=info msg="StopPodSandbox for \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" returns successfully"
May 8 00:48:37.247435 env[1220]: time="2025-05-08T00:48:37.247384600Z" level=info msg="RemovePodSandbox for \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\""
May 8 00:48:37.247527 env[1220]: time="2025-05-08T00:48:37.247447289Z" level=info msg="Forcibly stopping sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\""
May 8 00:48:37.247567 env[1220]: time="2025-05-08T00:48:37.247538751Z" level=info msg="TearDown network for sandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" successfully"
May 8 00:48:37.257145 env[1220]: time="2025-05-08T00:48:37.257087791Z" level=info msg="RemovePodSandbox \"d375c53112832e1fdba54e42efc9372238576613192da5b871bdd7ea9f1e6824\" returns successfully"
May 8 00:48:37.257661 env[1220]: time="2025-05-08T00:48:37.257626658Z" level=info msg="StopPodSandbox for \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\""
May 8 00:48:37.257789 env[1220]: time="2025-05-08T00:48:37.257742838Z" level=info msg="TearDown network for sandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" successfully"
May 8 00:48:37.257822 env[1220]: time="2025-05-08T00:48:37.257787602Z" level=info msg="StopPodSandbox for \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" returns successfully"
May 8 00:48:37.258013 env[1220]: time="2025-05-08T00:48:37.257978613Z" level=info msg="shim disconnected" id=965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329
May 8 00:48:37.258193 env[1220]: time="2025-05-08T00:48:37.258167920Z" level=warning msg="cleaning up after shim disconnected" id=965403b88c3eaaf3202a793492812a8a4882fbbdecf99a9ab7c9c73a8246d329 namespace=k8s.io
May 8 00:48:37.258193 env[1220]: time="2025-05-08T00:48:37.258185273Z" level=info msg="cleaning up dead shim"
May 8 00:48:37.258286 env[1220]: time="2025-05-08T00:48:37.258057572Z" level=info msg="RemovePodSandbox for \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\""
May 8 00:48:37.258286 env[1220]: time="2025-05-08T00:48:37.258257620Z" level=info msg="Forcibly stopping sandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\""
May 8 00:48:37.258400 env[1220]: time="2025-05-08T00:48:37.258314427Z" level=info msg="TearDown network for sandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" successfully"
May 8 00:48:37.262069 env[1220]: time="2025-05-08T00:48:37.262029031Z" level=info msg="RemovePodSandbox \"2f10910824308e2c9daaea2e76e445ee3c74efff3057fce019d8f5351cc5b185\" returns successfully"
May 8 00:48:37.266407 env[1220]: time="2025-05-08T00:48:37.266345460Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3918 runtime=io.containerd.runc.v2\n"
May 8 00:48:37.270194 kubelet[1954]: I0508 00:48:37.270162 1954 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2" path="/var/lib/kubelet/pods/fd8c11a4-dc9b-4c6d-9f78-6d039cc732a2/volumes"
May 8 00:48:37.331071 kubelet[1954]: E0508 00:48:37.331021 1954 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:48:37.757651 kubelet[1954]: E0508 00:48:37.757594 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:37.758993 env[1220]: time="2025-05-08T00:48:37.758959402Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:48:37.779706 env[1220]: time="2025-05-08T00:48:37.779647001Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207\""
May 8 00:48:37.780138 env[1220]: time="2025-05-08T00:48:37.780114744Z" level=info msg="StartContainer for \"835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207\""
May 8 00:48:37.792916 systemd[1]: Started cri-containerd-835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207.scope.
May 8 00:48:37.821607 env[1220]: time="2025-05-08T00:48:37.821537563Z" level=info msg="StartContainer for \"835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207\" returns successfully"
May 8 00:48:37.826177 systemd[1]: cri-containerd-835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207.scope: Deactivated successfully.
May 8 00:48:37.848194 env[1220]: time="2025-05-08T00:48:37.848140752Z" level=info msg="shim disconnected" id=835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207
May 8 00:48:37.848465 env[1220]: time="2025-05-08T00:48:37.848441681Z" level=warning msg="cleaning up after shim disconnected" id=835ed572e6c79841306e460b04d410e0268d3f24748da84820288b60293b6207 namespace=k8s.io
May 8 00:48:37.848465 env[1220]: time="2025-05-08T00:48:37.848462009Z" level=info msg="cleaning up dead shim"
May 8 00:48:37.854910 env[1220]: time="2025-05-08T00:48:37.854867823Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3982 runtime=io.containerd.runc.v2\n"
May 8 00:48:38.761448 kubelet[1954]: E0508 00:48:38.761387 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:38.763132 env[1220]: time="2025-05-08T00:48:38.763059262Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:48:38.782322 env[1220]: time="2025-05-08T00:48:38.782253389Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1\""
May 8 00:48:38.782824 env[1220]: time="2025-05-08T00:48:38.782802164Z" level=info msg="StartContainer for \"f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1\""
May 8 00:48:38.803799 systemd[1]: Started cri-containerd-f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1.scope.
May 8 00:48:38.835023 env[1220]: time="2025-05-08T00:48:38.834951537Z" level=info msg="StartContainer for \"f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1\" returns successfully"
May 8 00:48:38.841866 systemd[1]: cri-containerd-f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1.scope: Deactivated successfully.
May 8 00:48:38.871188 env[1220]: time="2025-05-08T00:48:38.871096535Z" level=info msg="shim disconnected" id=f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1
May 8 00:48:38.871188 env[1220]: time="2025-05-08T00:48:38.871177298Z" level=warning msg="cleaning up after shim disconnected" id=f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1 namespace=k8s.io
May 8 00:48:38.871188 env[1220]: time="2025-05-08T00:48:38.871191795Z" level=info msg="cleaning up dead shim"
May 8 00:48:38.881612 env[1220]: time="2025-05-08T00:48:38.881535904Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4038 runtime=io.containerd.runc.v2\n"
May 8 00:48:39.066996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f1b98386c3e591941b7b2ba1fc054f376b877cf84f1f17aa5ae510b2b616e1-rootfs.mount: Deactivated successfully.
May 8 00:48:39.650513 kubelet[1954]: I0508 00:48:39.650415 1954 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:48:39Z","lastTransitionTime":"2025-05-08T00:48:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:48:39.765165 kubelet[1954]: E0508 00:48:39.765125 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:39.766520 env[1220]: time="2025-05-08T00:48:39.766471655Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:48:40.418497 env[1220]: time="2025-05-08T00:48:40.418387801Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66\""
May 8 00:48:40.419190 env[1220]: time="2025-05-08T00:48:40.419143928Z" level=info msg="StartContainer for \"81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66\""
May 8 00:48:40.448083 systemd[1]: run-containerd-runc-k8s.io-81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66-runc.tiCExX.mount: Deactivated successfully.
May 8 00:48:40.452769 systemd[1]: Started cri-containerd-81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66.scope.
May 8 00:48:40.476566 systemd[1]: cri-containerd-81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66.scope: Deactivated successfully.
May 8 00:48:40.537755 env[1220]: time="2025-05-08T00:48:40.537654103Z" level=info msg="StartContainer for \"81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66\" returns successfully"
May 8 00:48:40.584938 env[1220]: time="2025-05-08T00:48:40.584869606Z" level=info msg="shim disconnected" id=81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66
May 8 00:48:40.584938 env[1220]: time="2025-05-08T00:48:40.584931684Z" level=warning msg="cleaning up after shim disconnected" id=81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66 namespace=k8s.io
May 8 00:48:40.584938 env[1220]: time="2025-05-08T00:48:40.584943095Z" level=info msg="cleaning up dead shim"
May 8 00:48:40.596161 env[1220]: time="2025-05-08T00:48:40.596087141Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4092 runtime=io.containerd.runc.v2\n"
May 8 00:48:40.772933 kubelet[1954]: E0508 00:48:40.772325 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:40.774240 env[1220]: time="2025-05-08T00:48:40.774194260Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:48:40.856542 env[1220]: time="2025-05-08T00:48:40.856459992Z" level=info msg="CreateContainer within sandbox \"b5d7a2b41020e3c70993442390898a5cc6f915dd2558f69445c332cb2de4ae6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3\""
May 8 00:48:40.857221 env[1220]: time="2025-05-08T00:48:40.857179590Z" level=info msg="StartContainer for \"cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3\""
May 8 00:48:40.872317 systemd[1]: Started cri-containerd-cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3.scope.
May 8 00:48:41.043807 env[1220]: time="2025-05-08T00:48:41.043661792Z" level=info msg="StartContainer for \"cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3\" returns successfully"
May 8 00:48:41.326437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81421767b8192c384ede84922179b0cd08e50d30f5e4825ed98891c34852ac66-rootfs.mount: Deactivated successfully.
May 8 00:48:41.334652 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 8 00:48:41.778480 kubelet[1954]: E0508 00:48:41.778329 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:41.794529 kubelet[1954]: I0508 00:48:41.794453 1954 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tjtnm" podStartSLOduration=5.794426233 podStartE2EDuration="5.794426233s" podCreationTimestamp="2025-05-08 00:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:48:41.794133391 +0000 UTC m=+124.664752770" watchObservedRunningTime="2025-05-08 00:48:41.794426233 +0000 UTC m=+124.665045612"
May 8 00:48:43.099366 kubelet[1954]: E0508 00:48:43.099234 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:44.078371 systemd-networkd[1039]: lxc_health: Link UP
May 8 00:48:44.090095 systemd-networkd[1039]: lxc_health: Gained carrier
May 8 00:48:44.090604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 00:48:44.977819 systemd[1]: run-containerd-runc-k8s.io-cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3-runc.jMqFpB.mount: Deactivated successfully.
May 8 00:48:45.099465 kubelet[1954]: E0508 00:48:45.099420 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:45.786215 kubelet[1954]: E0508 00:48:45.786168 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:46.030780 systemd-networkd[1039]: lxc_health: Gained IPv6LL
May 8 00:48:46.787595 kubelet[1954]: E0508 00:48:46.787528 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:48:47.122395 systemd[1]: run-containerd-runc-k8s.io-cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3-runc.HL6EUX.mount: Deactivated successfully.
May 8 00:48:49.256713 systemd[1]: run-containerd-runc-k8s.io-cdc31298de88539b6c5d75d0e6abb74ebe7a83f797e29615f9b28977b8e7a3a3-runc.yhxEwT.mount: Deactivated successfully.
May 8 00:48:51.457387 sshd[3799]: pam_unix(sshd:session): session closed for user core
May 8 00:48:51.460676 systemd[1]: sshd@29-10.0.0.73:22-10.0.0.1:45660.service: Deactivated successfully.
May 8 00:48:51.461457 systemd[1]: session-30.scope: Deactivated successfully.
May 8 00:48:51.462182 systemd-logind[1205]: Session 30 logged out. Waiting for processes to exit.
May 8 00:48:51.463148 systemd-logind[1205]: Removed session 30.
May 8 00:48:52.266953 kubelet[1954]: E0508 00:48:52.266887 1954 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"