May 13 00:41:52.247689 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025
May 13 00:41:52.247727 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:52.247740 kernel: BIOS-provided physical RAM map:
May 13 00:41:52.247749 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 00:41:52.247757 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 00:41:52.247764 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 00:41:52.247774 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 00:41:52.247783 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 00:41:52.247795 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:41:52.247803 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 00:41:52.247811 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 00:41:52.247820 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 00:41:52.247827 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 00:41:52.247836 kernel: NX (Execute Disable) protection: active
May 13 00:41:52.247849 kernel: SMBIOS 2.8 present.
May 13 00:41:52.247859 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 00:41:52.247868 kernel: Hypervisor detected: KVM
May 13 00:41:52.247876 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:41:52.247891 kernel: kvm-clock: cpu 0, msr 2c196001, primary cpu clock
May 13 00:41:52.247914 kernel: kvm-clock: using sched offset of 4287977549 cycles
May 13 00:41:52.247944 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:41:52.247981 kernel: tsc: Detected 2794.746 MHz processor
May 13 00:41:52.248003 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:41:52.248019 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:41:52.248028 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 00:41:52.248038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:41:52.248047 kernel: Using GB pages for direct mapping
May 13 00:41:52.248056 kernel: ACPI: Early table checksum verification disabled
May 13 00:41:52.248065 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 00:41:52.248074 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248084 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248093 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248106 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 00:41:52.248115 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248123 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248132 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248142 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:52.248151 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 00:41:52.248160 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 00:41:52.248170 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 00:41:52.248187 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 00:41:52.248197 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 00:41:52.248206 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 00:41:52.248228 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 00:41:52.248251 kernel: No NUMA configuration found
May 13 00:41:52.248262 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 00:41:52.248276 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 13 00:41:52.248287 kernel: Zone ranges:
May 13 00:41:52.248297 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:41:52.248307 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 00:41:52.248316 kernel: Normal empty
May 13 00:41:52.248325 kernel: Movable zone start for each node
May 13 00:41:52.248334 kernel: Early memory node ranges
May 13 00:41:52.248343 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 00:41:52.248352 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 00:41:52.248365 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 00:41:52.248379 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:41:52.248388 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 00:41:52.248413 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 00:41:52.248422 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:41:52.248431 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:41:52.248441 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:41:52.248460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:41:52.248471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:41:52.248481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:41:52.248499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:41:52.248509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:41:52.248519 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:41:52.248528 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:41:52.248538 kernel: TSC deadline timer available
May 13 00:41:52.248548 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:41:52.248558 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:41:52.248567 kernel: kvm-guest: setup PV sched yield
May 13 00:41:52.248577 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 00:41:52.248590 kernel: Booting paravirtualized kernel on KVM
May 13 00:41:52.248600 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:41:52.248610 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 13 00:41:52.248620 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 13 00:41:52.248630 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 13 00:41:52.248639 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:41:52.248649 kernel: kvm-guest: setup async PF for cpu 0
May 13 00:41:52.248658 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 13 00:41:52.248668 kernel: kvm-guest: PV spinlocks enabled
May 13 00:41:52.248682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:41:52.248692 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 13 00:41:52.248702 kernel: Policy zone: DMA32
May 13 00:41:52.248713 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:52.248724 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:41:52.248733 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:41:52.248743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:41:52.248753 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:41:52.248766 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved)
May 13 00:41:52.248775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:41:52.248785 kernel: ftrace: allocating 34584 entries in 136 pages
May 13 00:41:52.248795 kernel: ftrace: allocated 136 pages with 2 groups
May 13 00:41:52.248805 kernel: rcu: Hierarchical RCU implementation.
May 13 00:41:52.248816 kernel: rcu: RCU event tracing is enabled.
May 13 00:41:52.248826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:41:52.248836 kernel: Rude variant of Tasks RCU enabled.
May 13 00:41:52.248846 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:41:52.248860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:41:52.248870 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:41:52.248880 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:41:52.248890 kernel: random: crng init done
May 13 00:41:52.248900 kernel: Console: colour VGA+ 80x25
May 13 00:41:52.248910 kernel: printk: console [ttyS0] enabled
May 13 00:41:52.248920 kernel: ACPI: Core revision 20210730
May 13 00:41:52.248930 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:41:52.248941 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:41:52.248954 kernel: x2apic enabled
May 13 00:41:52.248964 kernel: Switched APIC routing to physical x2apic.
May 13 00:41:52.248978 kernel: kvm-guest: setup PV IPIs
May 13 00:41:52.248989 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:41:52.248999 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:41:52.249013 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 13 00:41:52.249024 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:41:52.249034 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:41:52.249044 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:41:52.249066 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:41:52.249076 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:41:52.249087 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:41:52.249099 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:41:52.249110 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:41:52.249120 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:41:52.249130 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 13 00:41:52.249141 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:41:52.249151 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:41:52.249165 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:41:52.249176 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:41:52.249187 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 13 00:41:52.249198 kernel: Freeing SMP alternatives memory: 32K
May 13 00:41:52.249208 kernel: pid_max: default: 32768 minimum: 301
May 13 00:41:52.249218 kernel: LSM: Security Framework initializing
May 13 00:41:52.249229 kernel: SELinux: Initializing.
May 13 00:41:52.249243 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:41:52.249254 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:41:52.249265 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:41:52.249275 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:41:52.249285 kernel: ... version: 0
May 13 00:41:52.249295 kernel: ... bit width: 48
May 13 00:41:52.249305 kernel: ... generic registers: 6
May 13 00:41:52.249316 kernel: ... value mask: 0000ffffffffffff
May 13 00:41:52.249327 kernel: ... max period: 00007fffffffffff
May 13 00:41:52.249341 kernel: ... fixed-purpose events: 0
May 13 00:41:52.249352 kernel: ... event mask: 000000000000003f
May 13 00:41:52.249362 kernel: signal: max sigframe size: 1776
May 13 00:41:52.249372 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:41:52.249382 kernel: smp: Bringing up secondary CPUs ...
May 13 00:41:52.249410 kernel: x86: Booting SMP configuration:
May 13 00:41:52.249421 kernel: .... node #0, CPUs: #1
May 13 00:41:52.249431 kernel: kvm-clock: cpu 1, msr 2c196041, secondary cpu clock
May 13 00:41:52.249441 kernel: kvm-guest: setup async PF for cpu 1
May 13 00:41:52.249461 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 13 00:41:52.249476 kernel: #2
May 13 00:41:52.249486 kernel: kvm-clock: cpu 2, msr 2c196081, secondary cpu clock
May 13 00:41:52.249496 kernel: kvm-guest: setup async PF for cpu 2
May 13 00:41:52.249507 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 13 00:41:52.249517 kernel: #3
May 13 00:41:52.249532 kernel: kvm-clock: cpu 3, msr 2c1960c1, secondary cpu clock
May 13 00:41:52.249542 kernel: kvm-guest: setup async PF for cpu 3
May 13 00:41:52.249553 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 13 00:41:52.249563 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:41:52.249577 kernel: smpboot: Max logical packages: 1
May 13 00:41:52.249587 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 13 00:41:52.249598 kernel: devtmpfs: initialized
May 13 00:41:52.249608 kernel: x86/mm: Memory block size: 128MB
May 13 00:41:52.249618 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:41:52.249628 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:41:52.249638 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:41:52.249648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:41:52.249659 kernel: audit: initializing netlink subsys (disabled)
May 13 00:41:52.249673 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:41:52.249683 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:41:52.249693 kernel: audit: type=2000 audit(1747096910.968:1): state=initialized audit_enabled=0 res=1
May 13 00:41:52.249704 kernel: cpuidle: using governor menu
May 13 00:41:52.249714 kernel: ACPI: bus type PCI registered
May 13 00:41:52.249724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:41:52.249735 kernel: dca service started, version 1.12.1
May 13 00:41:52.249745 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:41:52.249756 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 13 00:41:52.249771 kernel: PCI: Using configuration type 1 for base access
May 13 00:41:52.249782 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:41:52.249793 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:41:52.249804 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:41:52.249814 kernel: ACPI: Added _OSI(Module Device)
May 13 00:41:52.249824 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:41:52.249834 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:41:52.249845 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:41:52.249855 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:41:52.249870 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:41:52.249881 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:41:52.249891 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:41:52.249902 kernel: ACPI: Interpreter enabled
May 13 00:41:52.249912 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:41:52.249922 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:41:52.249933 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:41:52.249943 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:41:52.249953 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:41:52.250215 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:41:52.250331 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:41:52.250490 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:41:52.250507 kernel: PCI host bridge to bus 0000:00
May 13 00:41:52.250639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:41:52.250744 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:41:52.250855 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:41:52.250960 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:41:52.251066 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:41:52.251170 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 00:41:52.251322 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:41:52.251512 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:41:52.251718 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:41:52.253522 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 13 00:41:52.253675 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 13 00:41:52.253815 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 13 00:41:52.253956 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:41:52.254128 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:41:52.254289 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 13 00:41:52.254487 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 13 00:41:52.254685 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 00:41:52.254876 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:41:52.255043 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 13 00:41:52.255211 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 13 00:41:52.255325 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 00:41:52.255488 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:41:52.255644 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 13 00:41:52.255753 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 13 00:41:52.255965 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 00:41:52.256093 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 13 00:41:52.256239 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:41:52.256361 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:41:52.256526 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:41:52.256643 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 13 00:41:52.256746 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 13 00:41:52.256886 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:41:52.256997 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 13 00:41:52.257012 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:41:52.257023 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:41:52.257033 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:41:52.257042 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:41:52.257056 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:41:52.257065 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:41:52.257075 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:41:52.257085 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:41:52.257095 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:41:52.257104 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:41:52.257114 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:41:52.257123 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:41:52.257132 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:41:52.257145 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:41:52.257154 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:41:52.257164 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:41:52.257174 kernel: iommu: Default domain type: Translated
May 13 00:41:52.257183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:41:52.257298 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:41:52.257431 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:41:52.257555 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:41:52.257575 kernel: vgaarb: loaded
May 13 00:41:52.257585 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:41:52.257595 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 00:41:52.257605 kernel: PTP clock support registered
May 13 00:41:52.257614 kernel: PCI: Using ACPI for IRQ routing
May 13 00:41:52.257624 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:41:52.257633 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 00:41:52.257643 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 00:41:52.257652 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:41:52.257662 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:41:52.257675 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:41:52.257685 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:41:52.257696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:41:52.257705 kernel: pnp: PnP ACPI init
May 13 00:41:52.257850 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:41:52.257868 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:41:52.257879 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:41:52.257890 kernel: NET: Registered PF_INET protocol family
May 13 00:41:52.257904 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:41:52.257915 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:41:52.257925 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:41:52.257935 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:41:52.257945 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:41:52.257956 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:41:52.257965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:41:52.257975 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:41:52.257984 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:41:52.257998 kernel: NET: Registered PF_XDP protocol family
May 13 00:41:52.258117 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:41:52.258224 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:41:52.258326 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:41:52.258462 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:41:52.258568 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:41:52.258664 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 00:41:52.258680 kernel: PCI: CLS 0 bytes, default 64
May 13 00:41:52.258695 kernel: Initialise system trusted keyrings
May 13 00:41:52.258705 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:41:52.258714 kernel: Key type asymmetric registered
May 13 00:41:52.258723 kernel: Asymmetric key parser 'x509' registered
May 13 00:41:52.258733 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:41:52.258743 kernel: io scheduler mq-deadline registered
May 13 00:41:52.258752 kernel: io scheduler kyber registered
May 13 00:41:52.258762 kernel: io scheduler bfq registered
May 13 00:41:52.258772 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:41:52.258785 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:41:52.258794 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:41:52.258804 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:41:52.258813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:41:52.258823 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:41:52.258833 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:41:52.258843 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:41:52.258853 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:41:52.258974 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:41:52.259090 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:41:52.259106 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:41:52.259202 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:41:51 UTC (1747096911)
May 13 00:41:52.259301 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:41:52.259316 kernel: NET: Registered PF_INET6 protocol family
May 13 00:41:52.259326 kernel: Segment Routing with IPv6
May 13 00:41:52.259336 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:41:52.259346 kernel: NET: Registered PF_PACKET protocol family
May 13 00:41:52.259359 kernel: Key type dns_resolver registered
May 13 00:41:52.259369 kernel: IPI shorthand broadcast: enabled
May 13 00:41:52.259379 kernel: sched_clock: Marking stable (608093289, 128260460)->(800835661, -64481912)
May 13 00:41:52.259403 kernel: registered taskstats version 1
May 13 00:41:52.259414 kernel: Loading compiled-in X.509 certificates
May 13 00:41:52.259424 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095'
May 13 00:41:52.259434 kernel: Key type .fscrypt registered
May 13 00:41:52.259443 kernel: Key type fscrypt-provisioning registered
May 13 00:41:52.259465 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:41:52.259478 kernel: ima: Allocated hash algorithm: sha1
May 13 00:41:52.259488 kernel: ima: No architecture policies found
May 13 00:41:52.259498 kernel: clk: Disabling unused clocks
May 13 00:41:52.259508 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 13 00:41:52.259518 kernel: Write protecting the kernel read-only data: 28672k
May 13 00:41:52.259527 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 13 00:41:52.259537 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 13 00:41:52.259547 kernel: Run /init as init process
May 13 00:41:52.259556 kernel: with arguments:
May 13 00:41:52.259568 kernel: /init
May 13 00:41:52.259577 kernel: with environment:
May 13 00:41:52.259587 kernel: HOME=/
May 13 00:41:52.259596 kernel: TERM=linux
May 13 00:41:52.259605 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:41:52.259618 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:41:52.259631 systemd[1]: Detected virtualization kvm.
May 13 00:41:52.259642 systemd[1]: Detected architecture x86-64.
May 13 00:41:52.259654 systemd[1]: Running in initrd.
May 13 00:41:52.259664 systemd[1]: No hostname configured, using default hostname.
May 13 00:41:52.259673 systemd[1]: Hostname set to .
May 13 00:41:52.259684 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:41:52.259695 systemd[1]: Queued start job for default target initrd.target.
May 13 00:41:52.259705 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:41:52.259716 systemd[1]: Reached target cryptsetup.target.
May 13 00:41:52.259726 systemd[1]: Reached target paths.target.
May 13 00:41:52.259740 systemd[1]: Reached target slices.target.
May 13 00:41:52.259757 systemd[1]: Reached target swap.target.
May 13 00:41:52.259769 systemd[1]: Reached target timers.target.
May 13 00:41:52.259781 systemd[1]: Listening on iscsid.socket.
May 13 00:41:52.259790 systemd[1]: Listening on iscsiuio.socket.
May 13 00:41:52.259802 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:41:52.259813 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:41:52.259823 systemd[1]: Listening on systemd-journald.socket.
May 13 00:41:52.259834 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:41:52.259844 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:41:52.259854 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:41:52.259863 systemd[1]: Reached target sockets.target.
May 13 00:41:52.259874 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:41:52.259884 systemd[1]: Finished network-cleanup.service.
May 13 00:41:52.259898 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:41:52.259908 systemd[1]: Starting systemd-journald.service...
May 13 00:41:52.259919 systemd[1]: Starting systemd-modules-load.service...
May 13 00:41:52.259930 systemd[1]: Starting systemd-resolved.service...
May 13 00:41:52.259940 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:41:52.259951 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:41:52.259967 systemd-journald[197]: Journal started
May 13 00:41:52.260036 systemd-journald[197]: Runtime Journal (/run/log/journal/2e4cfe9f63dd44daba82bb7a08ceaca7) is 6.0M, max 48.5M, 42.5M free.
May 13 00:41:52.248285 systemd-modules-load[198]: Inserted module 'overlay'
May 13 00:41:52.292186 kernel: audit: type=1130 audit(1747096912.285:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.292230 systemd[1]: Started systemd-journald.service.
May 13 00:41:52.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.265908 systemd-resolved[199]: Positive Trust Anchors:
May 13 00:41:52.300131 kernel: audit: type=1130 audit(1747096912.291:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.300151 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:41:52.300167 kernel: audit: type=1130 audit(1747096912.299:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.265935 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:41:52.265975 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:41:52.269006 systemd-resolved[199]: Defaulting to hostname 'linux'.
May 13 00:41:52.292509 systemd[1]: Started systemd-resolved.service.
May 13 00:41:52.300405 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:41:52.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.315153 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:41:52.320866 kernel: audit: type=1130 audit(1747096912.314:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.320910 kernel: Bridge firewalling registered
May 13 00:41:52.320921 kernel: audit: type=1130 audit(1747096912.320:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.319087 systemd-modules-load[198]: Inserted module 'br_netfilter'
May 13 00:41:52.321067 systemd[1]: Reached target nss-lookup.target.
May 13 00:41:52.326566 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:41:52.328039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:41:52.335184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:41:52.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.340425 kernel: audit: type=1130 audit(1747096912.337:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.342427 kernel: SCSI subsystem initialized
May 13 00:41:52.342969 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:41:52.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.346215 systemd[1]: Starting dracut-cmdline.service...
May 13 00:41:52.348901 kernel: audit: type=1130 audit(1747096912.345:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.357914 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:41:52.358017 kernel: device-mapper: uevent: version 1.0.3
May 13 00:41:52.360439 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:41:52.364254 systemd-modules-load[198]: Inserted module 'dm_multipath'
May 13 00:41:52.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.366899 dracut-cmdline[215]: dracut-dracut-053
May 13 00:41:52.365217 systemd[1]: Finished systemd-modules-load.service.
May 13 00:41:52.381506 kernel: audit: type=1130 audit(1747096912.366:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.381587 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:52.370309 systemd[1]: Starting systemd-sysctl.service...
May 13 00:41:52.418993 systemd[1]: Finished systemd-sysctl.service.
May 13 00:41:52.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.426435 kernel: audit: type=1130 audit(1747096912.421:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.504463 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:41:52.530652 kernel: iscsi: registered transport (tcp)
May 13 00:41:52.562950 kernel: iscsi: registered transport (qla4xxx)
May 13 00:41:52.563066 kernel: QLogic iSCSI HBA Driver
May 13 00:41:52.613908 systemd[1]: Finished dracut-cmdline.service.
May 13 00:41:52.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:52.617673 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:41:52.681535 kernel: raid6: avx2x4 gen() 18846 MB/s
May 13 00:41:52.698435 kernel: raid6: avx2x4 xor() 5915 MB/s
May 13 00:41:52.715533 kernel: raid6: avx2x2 gen() 28700 MB/s
May 13 00:41:52.732435 kernel: raid6: avx2x2 xor() 15518 MB/s
May 13 00:41:52.749470 kernel: raid6: avx2x1 gen() 20506 MB/s
May 13 00:41:52.766967 kernel: raid6: avx2x1 xor() 12824 MB/s
May 13 00:41:52.783468 kernel: raid6: sse2x4 gen() 12563 MB/s
May 13 00:41:52.810459 kernel: raid6: sse2x4 xor() 6747 MB/s
May 13 00:41:52.827464 kernel: raid6: sse2x2 gen() 11085 MB/s
May 13 00:41:52.844454 kernel: raid6: sse2x2 xor() 6888 MB/s
May 13 00:41:52.861462 kernel: raid6: sse2x1 gen() 8610 MB/s
May 13 00:41:52.879313 kernel: raid6: sse2x1 xor() 5470 MB/s
May 13 00:41:52.879426 kernel: raid6: using algorithm avx2x2 gen() 28700 MB/s
May 13 00:41:52.879456 kernel: raid6: .... xor() 15518 MB/s, rmw enabled
May 13 00:41:52.880269 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:41:52.897467 kernel: xor: automatically using best checksumming function avx
May 13 00:41:52.991714 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 13 00:41:53.006772 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:41:53.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:53.007000 audit: BPF prog-id=7 op=LOAD
May 13 00:41:53.008000 audit: BPF prog-id=8 op=LOAD
May 13 00:41:53.009460 systemd[1]: Starting systemd-udevd.service...
May 13 00:41:53.024229 systemd-udevd[401]: Using default interface naming scheme 'v252'.
May 13 00:41:53.029617 systemd[1]: Started systemd-udevd.service.
May 13 00:41:53.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:53.033624 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:41:53.047000 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
May 13 00:41:53.076372 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:41:53.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:53.079846 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:41:53.117703 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:41:53.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:53.151425 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:41:53.195841 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:41:53.195860 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:41:53.195871 kernel: GPT:9289727 != 19775487
May 13 00:41:53.195882 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:41:53.195893 kernel: GPT:9289727 != 19775487
May 13 00:41:53.195910 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:41:53.195920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:53.195931 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:41:53.196782 kernel: AES CTR mode by8 optimization enabled
May 13 00:41:53.199422 kernel: libata version 3.00 loaded.
May 13 00:41:53.209422 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:41:53.229888 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:41:53.229906 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:41:53.230050 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:41:53.230152 kernel: scsi host0: ahci
May 13 00:41:53.230251 kernel: scsi host1: ahci
May 13 00:41:53.230373 kernel: scsi host2: ahci
May 13 00:41:53.230535 kernel: scsi host3: ahci
May 13 00:41:53.230653 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442)
May 13 00:41:53.230664 kernel: scsi host4: ahci
May 13 00:41:53.230757 kernel: scsi host5: ahci
May 13 00:41:53.230863 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 13 00:41:53.230874 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 13 00:41:53.230883 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 13 00:41:53.230891 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 13 00:41:53.230900 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 13 00:41:53.230909 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 13 00:41:53.225836 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:41:53.274310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:41:53.277720 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:41:53.278795 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:41:53.283858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:41:53.285745 systemd[1]: Starting disk-uuid.service...
May 13 00:41:53.297762 disk-uuid[520]: Primary Header is updated.
May 13 00:41:53.297762 disk-uuid[520]: Secondary Entries is updated.
May 13 00:41:53.297762 disk-uuid[520]: Secondary Header is updated.
May 13 00:41:53.302447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:53.306435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:53.539474 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:41:53.539598 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:41:53.543776 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:41:53.543804 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:41:53.543814 kernel: ata3.00: applying bridge limits
May 13 00:41:53.544709 kernel: ata3.00: configured for UDMA/100
May 13 00:41:53.545435 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:41:53.550420 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:41:53.550465 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:41:53.551440 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:41:53.586645 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:41:53.604385 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:41:53.604432 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:41:54.307278 disk-uuid[521]: The operation has completed successfully.
May 13 00:41:54.308790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:54.333195 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:41:54.333319 systemd[1]: Finished disk-uuid.service.
May 13 00:41:54.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.349448 systemd[1]: Starting verity-setup.service...
May 13 00:41:54.365429 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:41:54.386143 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:41:54.388600 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:41:54.390867 systemd[1]: Finished verity-setup.service.
May 13 00:41:54.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.449432 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:41:54.449941 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:41:54.450135 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:41:54.451381 systemd[1]: Starting ignition-setup.service...
May 13 00:41:54.454098 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:41:54.464048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:41:54.464099 kernel: BTRFS info (device vda6): using free space tree
May 13 00:41:54.464109 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:41:54.471827 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:41:54.480760 systemd[1]: Finished ignition-setup.service.
May 13 00:41:54.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.481530 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:41:54.519724 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:41:54.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.521000 audit: BPF prog-id=9 op=LOAD
May 13 00:41:54.522218 systemd[1]: Starting systemd-networkd.service...
May 13 00:41:54.522703 ignition[645]: Ignition 2.14.0
May 13 00:41:54.522712 ignition[645]: Stage: fetch-offline
May 13 00:41:54.522780 ignition[645]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:54.522792 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:54.522921 ignition[645]: parsed url from cmdline: ""
May 13 00:41:54.522926 ignition[645]: no config URL provided
May 13 00:41:54.522932 ignition[645]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:41:54.522941 ignition[645]: no config at "/usr/lib/ignition/user.ign"
May 13 00:41:54.522961 ignition[645]: op(1): [started] loading QEMU firmware config module
May 13 00:41:54.522967 ignition[645]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:41:54.525504 ignition[645]: op(1): [finished] loading QEMU firmware config module
May 13 00:41:54.525525 ignition[645]: QEMU firmware config was not found. Ignoring...
May 13 00:41:54.534270 unknown[645]: fetched base config from "system"
May 13 00:41:54.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.528547 ignition[645]: parsing config with SHA512: 0e122f266948695c90c477255112b3a9a7ae4026069e91b444dbf0f6f88cd267b908428009b3327b6dc0bc58ac8b82274332cc3f852b683aed3137ef58ab7447
May 13 00:41:54.534276 unknown[645]: fetched user config from "qemu"
May 13 00:41:54.534587 ignition[645]: fetch-offline: fetch-offline passed
May 13 00:41:54.535297 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:41:54.534637 ignition[645]: Ignition finished successfully
May 13 00:41:54.557890 systemd-networkd[716]: lo: Link UP
May 13 00:41:54.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.557899 systemd-networkd[716]: lo: Gained carrier
May 13 00:41:54.558517 systemd-networkd[716]: Enumeration completed
May 13 00:41:54.558615 systemd[1]: Started systemd-networkd.service.
May 13 00:41:54.558811 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:41:54.560300 systemd[1]: Reached target network.target.
May 13 00:41:54.560970 systemd-networkd[716]: eth0: Link UP
May 13 00:41:54.560974 systemd-networkd[716]: eth0: Gained carrier
May 13 00:41:54.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.561326 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:41:54.561950 systemd[1]: Starting ignition-kargs.service...
May 13 00:41:54.563375 systemd[1]: Starting iscsiuio.service...
May 13 00:41:54.567993 systemd[1]: Started iscsiuio.service.
May 13 00:41:54.575075 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:41:54.575075 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 13 00:41:54.575075 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:41:54.575075 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:41:54.575075 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:41:54.575075 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:41:54.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.570128 systemd[1]: Starting iscsid.service...
May 13 00:41:54.576364 ignition[720]: Ignition 2.14.0
May 13 00:41:54.575095 systemd[1]: Started iscsid.service.
May 13 00:41:54.576369 ignition[720]: Stage: kargs
May 13 00:41:54.576629 systemd[1]: Starting dracut-initqueue.service...
May 13 00:41:54.576489 ignition[720]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:54.579043 systemd[1]: Finished ignition-kargs.service.
May 13 00:41:54.576498 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:54.580507 systemd[1]: Starting ignition-disks.service...
May 13 00:41:54.577265 ignition[720]: kargs: kargs passed
May 13 00:41:54.586472 systemd[1]: Finished dracut-initqueue.service.
May 13 00:41:54.577300 ignition[720]: Ignition finished successfully
May 13 00:41:54.587550 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:41:54.586383 ignition[732]: Ignition 2.14.0
May 13 00:41:54.588483 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:41:54.586429 ignition[732]: Stage: disks
May 13 00:41:54.590476 systemd[1]: Reached target remote-fs.target.
May 13 00:41:54.586543 ignition[732]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:54.586551 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:54.587294 ignition[732]: disks: disks passed
May 13 00:41:54.587325 ignition[732]: Ignition finished successfully
May 13 00:41:54.608926 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:41:54.610546 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:41:54.610696 systemd[1]: Finished ignition-disks.service.
May 13 00:41:54.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.613700 systemd[1]: Reached target initrd-root-device.target.
May 13 00:41:54.615577 systemd[1]: Reached target local-fs-pre.target.
May 13 00:41:54.617163 systemd[1]: Reached target local-fs.target.
May 13 00:41:54.618700 systemd[1]: Reached target sysinit.target.
May 13 00:41:54.620170 systemd[1]: Reached target basic.target.
May 13 00:41:54.621836 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:41:54.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.623986 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:41:54.632860 systemd-fsck[753]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 13 00:41:54.638132 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:41:54.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.640707 systemd[1]: Mounting sysroot.mount...
May 13 00:41:54.647225 systemd[1]: Mounted sysroot.mount.
May 13 00:41:54.648500 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:41:54.647345 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:41:54.650157 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:41:54.651133 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:41:54.651162 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:41:54.651181 systemd[1]: Reached target ignition-diskful.target.
May 13 00:41:54.653120 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:41:54.656119 systemd[1]: Starting initrd-setup-root.service...
May 13 00:41:54.662417 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:41:54.666146 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory
May 13 00:41:54.669352 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:41:54.672632 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:41:54.696229 systemd[1]: Finished initrd-setup-root.service.
May 13 00:41:54.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.698568 systemd[1]: Starting ignition-mount.service...
May 13 00:41:54.700710 systemd[1]: Starting sysroot-boot.service...
May 13 00:41:54.703222 bash[804]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:41:54.710872 ignition[805]: INFO : Ignition 2.14.0
May 13 00:41:54.710872 ignition[805]: INFO : Stage: mount
May 13 00:41:54.713636 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:54.713636 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:54.713636 ignition[805]: INFO : mount: mount passed
May 13 00:41:54.713636 ignition[805]: INFO : Ignition finished successfully
May 13 00:41:54.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:54.712589 systemd[1]: Finished ignition-mount.service.
May 13 00:41:54.717249 systemd[1]: Finished sysroot-boot.service.
May 13 00:41:55.398928 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:41:55.404431 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) May 13 00:41:55.406517 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:41:55.406537 kernel: BTRFS info (device vda6): using free space tree May 13 00:41:55.406547 kernel: BTRFS info (device vda6): has skinny extents May 13 00:41:55.410266 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:41:55.411915 systemd[1]: Starting ignition-files.service... May 13 00:41:55.426245 ignition[835]: INFO : Ignition 2.14.0 May 13 00:41:55.426245 ignition[835]: INFO : Stage: files May 13 00:41:55.428068 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:55.428068 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:55.428068 ignition[835]: DEBUG : files: compiled without relabeling support, skipping May 13 00:41:55.431768 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:41:55.431768 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:41:55.435698 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:41:55.437100 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:41:55.438714 unknown[835]: wrote ssh authorized keys file for user: core May 13 00:41:55.439856 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:41:55.441314 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 00:41:55.871575 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK May 13 00:41:56.421323 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] 
writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 00:41:56.421323 ignition[835]: INFO : files: op(8): [started] processing unit "containerd.service" May 13 00:41:56.426375 ignition[835]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:41:56.429537 ignition[835]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 00:41:56.429537 ignition[835]: INFO : files: op(8): [finished] processing unit "containerd.service" May 13 00:41:56.429537 ignition[835]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" May 13 00:41:56.435787 ignition[835]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:41:56.435787 ignition[835]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:41:56.435787 ignition[835]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" May 13 00:41:56.435787 ignition[835]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:41:56.435787 ignition[835]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:41:56.471170 ignition[835]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:41:56.473238 ignition[835]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:41:56.475067 ignition[835]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:41:56.477154 ignition[835]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:41:56.479208 ignition[835]: INFO : files: files passed May 13 00:41:56.479208 ignition[835]: INFO : Ignition finished successfully May 13 00:41:56.480879 systemd[1]: Finished ignition-files.service. May 13 00:41:56.489283 kernel: kauditd_printk_skb: 25 callbacks suppressed May 13 00:41:56.489324 kernel: audit: type=1130 audit(1747096916.482:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.484105 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:41:56.487722 systemd-networkd[716]: eth0: Gained IPv6LL May 13 00:41:56.493904 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 13 00:41:56.489266 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:41:56.506114 kernel: audit: type=1130 audit(1747096916.496:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:56.506148 kernel: audit: type=1131 audit(1747096916.496:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.506161 kernel: audit: type=1130 audit(1747096916.505:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.506434 initrd-setup-root-after-ignition[862]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:41:56.490578 systemd[1]: Starting ignition-quench.service... May 13 00:41:56.494114 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:41:56.494215 systemd[1]: Finished ignition-quench.service. May 13 00:41:56.497343 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:41:56.506257 systemd[1]: Reached target ignition-complete.target. May 13 00:41:56.523669 systemd[1]: Starting initrd-parse-etc.service... May 13 00:41:56.536976 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:41:56.537093 systemd[1]: Finished initrd-parse-etc.service. May 13 00:41:56.558304 kernel: audit: type=1130 audit(1747096916.549:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.558340 kernel: audit: type=1131 audit(1747096916.549:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.550158 systemd[1]: Reached target initrd-fs.target. May 13 00:41:56.558380 systemd[1]: Reached target initrd.target. May 13 00:41:56.559384 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:41:56.560683 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:41:56.575904 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:41:56.582527 kernel: audit: type=1130 audit(1747096916.576:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:56.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.578325 systemd[1]: Starting initrd-cleanup.service... May 13 00:41:56.589110 systemd[1]: Stopped target nss-lookup.target. May 13 00:41:56.590741 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:41:56.592757 systemd[1]: Stopped target timers.target. May 13 00:41:56.594890 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:41:56.602954 kernel: audit: type=1131 audit(1747096916.596:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.595059 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:41:56.597017 systemd[1]: Stopped target initrd.target. May 13 00:41:56.603165 systemd[1]: Stopped target basic.target. May 13 00:41:56.605666 systemd[1]: Stopped target ignition-complete.target. May 13 00:41:56.607996 systemd[1]: Stopped target ignition-diskful.target. May 13 00:41:56.610297 systemd[1]: Stopped target initrd-root-device.target. May 13 00:41:56.612612 systemd[1]: Stopped target remote-fs.target. May 13 00:41:56.614731 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:41:56.617014 systemd[1]: Stopped target sysinit.target. May 13 00:41:56.619063 systemd[1]: Stopped target local-fs.target. May 13 00:41:56.621052 systemd[1]: Stopped target local-fs-pre.target. May 13 00:41:56.623083 systemd[1]: Stopped target swap.target. May 13 00:41:56.633798 kernel: audit: type=1131 audit(1747096916.627:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.624945 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:41:56.625085 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:41:56.643294 kernel: audit: type=1131 audit(1747096916.636:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.627739 systemd[1]: Stopped target cryptsetup.target. May 13 00:41:56.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.633936 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:41:56.634146 systemd[1]: Stopped dracut-initqueue.service. May 13 00:41:56.636614 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 13 00:41:56.636785 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:41:56.643720 systemd[1]: Stopped target paths.target. May 13 00:41:56.645943 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:41:56.649575 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:41:56.651239 systemd[1]: Stopped target slices.target. May 13 00:41:56.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.653045 systemd[1]: Stopped target sockets.target. May 13 00:41:56.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.655047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:41:56.655274 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:41:56.667800 iscsid[724]: iscsid shutting down. May 13 00:41:56.660247 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:41:56.660416 systemd[1]: Stopped ignition-files.service. May 13 00:41:56.664233 systemd[1]: Stopping ignition-mount.service... May 13 00:41:56.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.676724 ignition[875]: INFO : Ignition 2.14.0 May 13 00:41:56.676724 ignition[875]: INFO : Stage: umount May 13 00:41:56.676724 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:41:56.676724 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:41:56.676724 ignition[875]: INFO : umount: umount passed May 13 00:41:56.676724 ignition[875]: INFO : Ignition finished successfully May 13 00:41:56.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.665928 systemd[1]: Stopping iscsid.service... May 13 00:41:56.670108 systemd[1]: Stopping sysroot-boot.service... May 13 00:41:56.672250 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:41:56.672631 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:41:56.676088 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:41:56.676266 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:41:56.700301 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:41:56.703386 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:41:56.703564 systemd[1]: Stopped iscsid.service. May 13 00:41:56.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.707572 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:41:56.707670 systemd[1]: Stopped ignition-mount.service. May 13 00:41:56.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:56.711126 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:41:56.713243 systemd[1]: Closed iscsid.socket. May 13 00:41:56.715206 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:41:56.715279 systemd[1]: Stopped ignition-disks.service. May 13 00:41:56.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.723036 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:41:56.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.723275 systemd[1]: Stopped ignition-kargs.service. May 13 00:41:56.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.724476 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:41:56.724538 systemd[1]: Stopped ignition-setup.service. May 13 00:41:56.729054 systemd[1]: Stopping iscsiuio.service... May 13 00:41:56.733280 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:41:56.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.733442 systemd[1]: Finished initrd-cleanup.service. May 13 00:41:56.737031 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:41:56.738245 systemd[1]: Stopped iscsiuio.service. May 13 00:41:56.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.742449 systemd[1]: Stopped target network.target. May 13 00:41:56.744793 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:41:56.744895 systemd[1]: Closed iscsiuio.socket. May 13 00:41:56.748017 systemd[1]: Stopping systemd-networkd.service... May 13 00:41:56.750460 systemd[1]: Stopping systemd-resolved.service... May 13 00:41:56.761412 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:41:56.761562 systemd[1]: Stopped systemd-resolved.service. May 13 00:41:56.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.766512 systemd-networkd[716]: eth0: DHCPv6 lease lost May 13 00:41:56.768902 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:41:56.769109 systemd[1]: Stopped systemd-networkd.service. May 13 00:41:56.770522 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:41:56.770569 systemd[1]: Closed systemd-networkd.socket. May 13 00:41:56.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:56.777862 systemd[1]: Stopping network-cleanup.service... May 13 00:41:56.778000 audit: BPF prog-id=6 op=UNLOAD May 13 00:41:56.779059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:41:56.781000 audit: BPF prog-id=9 op=UNLOAD May 13 00:41:56.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.780241 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:41:56.783462 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:41:56.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.784573 systemd[1]: Stopped systemd-sysctl.service. May 13 00:41:56.787117 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:41:56.788291 systemd[1]: Stopped systemd-modules-load.service. May 13 00:41:56.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.792028 systemd[1]: Stopping systemd-udevd.service... May 13 00:41:56.795367 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:41:56.797886 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:41:56.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.798021 systemd[1]: Stopped sysroot-boot.service. May 13 00:41:56.801621 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:41:56.801789 systemd[1]: Stopped systemd-udevd.service. May 13 00:41:56.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.806964 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:41:56.807038 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:41:56.810761 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:41:56.810840 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:41:56.813705 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:41:56.813800 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:41:56.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.816753 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:41:56.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.816924 systemd[1]: Stopped dracut-cmdline.service. May 13 00:41:56.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:56.818044 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:41:56.819036 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 00:41:56.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.820893 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:41:56.820939 systemd[1]: Stopped initrd-setup-root.service. May 13 00:41:56.826470 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:41:56.828815 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:41:56.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.828916 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 13 00:41:56.831787 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:41:56.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.831861 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:41:56.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.833037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:41:56.833081 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:41:56.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.836736 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 00:41:56.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:56.837197 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:41:56.837296 systemd[1]: Stopped network-cleanup.service. May 13 00:41:56.838981 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:41:56.839056 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:41:56.840700 systemd[1]: Reached target initrd-switch-root.target. May 13 00:41:56.844529 systemd[1]: Starting initrd-switch-root.service... May 13 00:41:56.855116 systemd[1]: Switching root. May 13 00:41:56.857000 audit: BPF prog-id=5 op=UNLOAD May 13 00:41:56.857000 audit: BPF prog-id=4 op=UNLOAD May 13 00:41:56.857000 audit: BPF prog-id=3 op=UNLOAD May 13 00:41:56.860000 audit: BPF prog-id=8 op=UNLOAD May 13 00:41:56.860000 audit: BPF prog-id=7 op=UNLOAD May 13 00:41:56.879113 systemd-journald[197]: Journal stopped May 13 00:42:00.732598 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). 
May 13 00:42:00.732655 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:42:00.732672 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:42:00.732682 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:42:00.732696 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:42:00.732706 kernel: SELinux: policy capability open_perms=1 May 13 00:42:00.732719 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:42:00.732731 kernel: SELinux: policy capability always_check_network=0 May 13 00:42:00.732740 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:42:00.732750 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:42:00.732759 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:42:00.732770 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:42:00.732780 systemd[1]: Successfully loaded SELinux policy in 52.088ms. May 13 00:42:00.732801 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.968ms. May 13 00:42:00.732812 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:42:00.732823 systemd[1]: Detected virtualization kvm. May 13 00:42:00.732833 systemd[1]: Detected architecture x86-64. May 13 00:42:00.732843 systemd[1]: Detected first boot. May 13 00:42:00.732853 systemd[1]: Initializing machine ID from VM UUID. May 13 00:42:00.732863 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:42:00.732874 systemd[1]: Populated /etc with preset unit settings. May 13 00:42:00.732885 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:42:00.732899 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:42:00.732912 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:42:00.732923 systemd[1]: Queued start job for default target multi-user.target. May 13 00:42:00.732934 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:42:00.732947 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:42:00.732957 systemd[1]: Created slice system-addon\x2drun.slice. May 13 00:42:00.732979 systemd[1]: Created slice system-getty.slice. May 13 00:42:00.732994 systemd[1]: Created slice system-modprobe.slice. May 13 00:42:00.733005 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:42:00.733016 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:42:00.733030 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:42:00.733040 systemd[1]: Created slice user.slice. May 13 00:42:00.733050 systemd[1]: Started systemd-ask-password-console.path. May 13 00:42:00.733060 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:42:00.733071 systemd[1]: Set up automount boot.automount. 
May 13 00:42:00.733081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:42:00.733091 systemd[1]: Reached target integritysetup.target. May 13 00:42:00.733101 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:42:00.733111 systemd[1]: Reached target remote-fs.target. May 13 00:42:00.733122 systemd[1]: Reached target slices.target. May 13 00:42:00.733133 systemd[1]: Reached target swap.target. May 13 00:42:00.733143 systemd[1]: Reached target torcx.target. May 13 00:42:00.733153 systemd[1]: Reached target veritysetup.target. May 13 00:42:00.733163 systemd[1]: Listening on systemd-coredump.socket. May 13 00:42:00.733174 systemd[1]: Listening on systemd-initctl.socket. May 13 00:42:00.733184 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:42:00.733195 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:42:00.733208 systemd[1]: Listening on systemd-journald.socket. May 13 00:42:00.733218 systemd[1]: Listening on systemd-networkd.socket. May 13 00:42:00.733228 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:42:00.733238 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:42:00.733248 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:42:00.733259 systemd[1]: Mounting dev-hugepages.mount... May 13 00:42:00.733283 systemd[1]: Mounting dev-mqueue.mount... May 13 00:42:00.733296 systemd[1]: Mounting media.mount... May 13 00:42:00.733309 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:42:00.733321 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:42:00.733335 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:42:00.733348 systemd[1]: Mounting tmp.mount... May 13 00:42:00.733360 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:42:00.733371 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:42:00.733381 systemd[1]: Starting kmod-static-nodes.service... May 13 00:42:00.733409 systemd[1]: Starting modprobe@configfs.service... May 13 00:42:00.733420 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:42:00.733430 systemd[1]: Starting modprobe@drm.service... May 13 00:42:00.733441 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:42:00.733454 systemd[1]: Starting modprobe@fuse.service... May 13 00:42:00.733464 systemd[1]: Starting modprobe@loop.service... May 13 00:42:00.733475 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:42:00.733486 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 00:42:00.733496 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 13 00:42:00.733509 systemd[1]: Starting systemd-journald.service... May 13 00:42:00.733519 kernel: fuse: init (API version 7.34) May 13 00:42:00.733529 systemd[1]: Starting systemd-modules-load.service... May 13 00:42:00.733539 systemd[1]: Starting systemd-network-generator.service... May 13 00:42:00.733551 systemd[1]: Starting systemd-remount-fs.service... May 13 00:42:00.733562 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:42:00.733572 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 13 00:42:00.733582 kernel: loop: module loaded May 13 00:42:00.733592 systemd[1]: Mounted dev-hugepages.mount. May 13 00:42:00.733603 systemd[1]: Mounted dev-mqueue.mount. May 13 00:42:00.733612 systemd[1]: Mounted media.mount. May 13 00:42:00.733625 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:42:00.733635 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 00:42:00.733647 systemd[1]: Mounted tmp.mount. May 13 00:42:00.733657 systemd[1]: Finished kmod-static-nodes.service. May 13 00:42:00.733670 systemd-journald[1017]: Journal started May 13 00:42:00.733717 systemd-journald[1017]: Runtime Journal (/run/log/journal/2e4cfe9f63dd44daba82bb7a08ceaca7) is 6.0M, max 48.5M, 42.5M free. May 13 00:42:00.473000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:42:00.473000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 00:42:00.730000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:42:00.730000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff12b6cf60 a2=4000 a3=7fff12b6cffc items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:42:00.730000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:42:00.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.735761 systemd[1]: Started systemd-journald.service. May 13 00:42:00.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.736766 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:42:00.737005 systemd[1]: Finished modprobe@configfs.service. May 13 00:42:00.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.738181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:42:00.738568 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:42:00.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:42:00.739909 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:42:00.740105 systemd[1]: Finished modprobe@drm.service. May 13 00:42:00.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.741378 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:42:00.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.742484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:42:00.742653 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:42:00.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.743988 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:42:00.744154 systemd[1]: Finished modprobe@fuse.service. May 13 00:42:00.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.745234 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:42:00.745557 systemd[1]: Finished modprobe@loop.service. May 13 00:42:00.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.746835 systemd[1]: Finished systemd-modules-load.service. May 13 00:42:00.748058 systemd[1]: Finished systemd-network-generator.service. May 13 00:42:00.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:42:00.749253 systemd[1]: Finished systemd-remount-fs.service. May 13 00:42:00.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.750499 systemd[1]: Reached target network-pre.target. May 13 00:42:00.752655 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:42:00.754500 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:42:00.755245 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:42:00.758060 systemd[1]: Starting systemd-hwdb-update.service... May 13 00:42:00.759922 systemd[1]: Starting systemd-journal-flush.service... May 13 00:42:00.763461 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:42:00.764828 systemd[1]: Starting systemd-random-seed.service... May 13 00:42:00.766703 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:42:00.767812 systemd[1]: Starting systemd-sysctl.service... May 13 00:42:00.769828 systemd[1]: Starting systemd-sysusers.service... May 13 00:42:00.772855 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:42:00.774258 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:42:00.775640 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:42:00.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.777892 systemd[1]: Starting systemd-udev-settle.service... May 13 00:42:00.779987 systemd[1]: Finished systemd-random-seed.service. May 13 00:42:00.781158 systemd[1]: Reached target first-boot-complete.target. May 13 00:42:00.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.788364 systemd[1]: Finished systemd-sysusers.service. May 13 00:42:00.791348 systemd-journald[1017]: Time spent on flushing to /var/log/journal/2e4cfe9f63dd44daba82bb7a08ceaca7 is 16.080ms for 1036 entries. May 13 00:42:00.791348 systemd-journald[1017]: System Journal (/var/log/journal/2e4cfe9f63dd44daba82bb7a08ceaca7) is 8.0M, max 195.6M, 187.6M free. May 13 00:42:00.999299 systemd-journald[1017]: Received client request to flush runtime journal. May 13 00:42:00.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:00.999783 udevadm[1058]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:42:00.820828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:42:00.822150 systemd[1]: Finished systemd-sysctl.service. May 13 00:42:00.858304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:42:01.000760 systemd[1]: Finished systemd-journal-flush.service. May 13 00:42:00.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.626439 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:42:01.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.629088 kernel: kauditd_printk_skb: 77 callbacks suppressed May 13 00:42:01.629157 kernel: audit: type=1130 audit(1747096921.627:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.629712 systemd[1]: Starting systemd-udevd.service... May 13 00:42:01.652705 systemd-udevd[1070]: Using default interface naming scheme 'v252'. May 13 00:42:01.670823 systemd[1]: Started systemd-udevd.service. May 13 00:42:01.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.674720 systemd[1]: Starting systemd-networkd.service... May 13 00:42:01.676421 kernel: audit: type=1130 audit(1747096921.671:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.684143 systemd[1]: Starting systemd-userdbd.service... May 13 00:42:01.728060 systemd[1]: Started systemd-userdbd.service. May 13 00:42:01.736378 kernel: audit: type=1130 audit(1747096921.729:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.736215 systemd[1]: Found device dev-ttyS0.device. May 13 00:42:01.749953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:42:01.773439 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:42:01.784421 kernel: ACPI: button: Power Button [PWRF] May 13 00:42:01.802852 systemd-networkd[1079]: lo: Link UP May 13 00:42:01.803300 systemd-networkd[1079]: lo: Gained carrier May 13 00:42:01.804013 systemd-networkd[1079]: Enumeration completed May 13 00:42:01.804328 systemd-networkd[1079]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
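For context, /usr/lib/systemd/network/zz-default.network referenced above is a systemd.network unit, and the DHCPv4 lease logged just below is the result of its DHCP-enabled [Network] section. The shipped file's exact contents are not reproduced in this log; a minimal DHCP-enabled .network sketch (hypothetical match on eth0, not the actual Flatcar default) would be:

    # example.network -- minimal systemd.network sketch, not the shipped zz-default.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes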
May 13 00:42:01.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.804368 systemd[1]: Started systemd-networkd.service. May 13 00:42:01.807486 systemd-networkd[1079]: eth0: Link UP May 13 00:42:01.807601 systemd-networkd[1079]: eth0: Gained carrier May 13 00:42:01.810441 kernel: audit: type=1130 audit(1747096921.805:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:01.820576 systemd-networkd[1079]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:42:01.797000 audit[1076]: AVC avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:42:01.830414 kernel: audit: type=1400 audit(1747096921.797:118): avc: denied { confidentiality } for pid=1076 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:42:01.847113 kernel: audit: type=1300 audit(1747096921.797:118): arch=c000003e syscall=175 success=yes exit=0 a0=556a82d3c780 a1=338ac a2=7f764fe96bc5 a3=5 items=110 ppid=1070 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:42:01.847281 kernel: audit: type=1307 audit(1747096921.797:118): cwd="/" May 13 00:42:01.847313 kernel: audit: type=1302 audit(1747096921.797:118): item=0 name=(null) inode=2063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.847362 kernel: audit: type=1302 audit(1747096921.797:118): item=1 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.847418 kernel: audit: type=1302 audit(1747096921.797:118): item=2 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit[1076]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556a82d3c780 a1=338ac a2=7f764fe96bc5 a3=5 items=110 ppid=1070 pid=1076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:42:01.797000 audit: CWD cwd="/" May 13 00:42:01.797000 audit: PATH item=0 name=(null) inode=2063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=1 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=2 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=3 name=(null) inode=14713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=4 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=5 name=(null) inode=14714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=6 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=7 name=(null) inode=14715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=8 name=(null) inode=14715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=9 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=10 name=(null) inode=14715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=11 name=(null) inode=14717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=12 name=(null) inode=14715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=13 name=(null) inode=14718 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=14 name=(null) inode=14715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=15 name=(null) inode=14719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=16 name=(null) inode=14715 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=17 name=(null) inode=14720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=18 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=19 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=20 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=21 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=22 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=23 name=(null) inode=14723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=24 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=25 name=(null) inode=14724 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=26 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=27 name=(null) inode=14725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=28 name=(null) inode=14721 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=29 name=(null) inode=14726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=30 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=31 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=32 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=33 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=34 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=35 
name=(null) inode=14729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=36 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=37 name=(null) inode=14730 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=38 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=39 name=(null) inode=14731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=40 name=(null) inode=14727 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=41 name=(null) inode=14732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=42 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=43 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=44 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=45 name=(null) inode=14734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=46 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=47 name=(null) inode=14735 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=48 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=49 name=(null) inode=14736 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=50 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=51 name=(null) inode=14737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=52 name=(null) inode=14733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=53 name=(null) inode=14738 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=54 name=(null) inode=2063 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=55 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=56 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=57 name=(null) inode=14740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=58 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=59 name=(null) inode=14741 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=60 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=61 name=(null) inode=14742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=62 name=(null) inode=14742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=63 name=(null) inode=14743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=64 name=(null) inode=14742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=65 name=(null) inode=14744 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=66 name=(null) inode=14742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=67 name=(null) inode=14745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=68 name=(null) inode=14742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=69 name=(null) inode=14746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=70 name=(null) inode=14742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=71 name=(null) inode=14747 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=72 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=73 name=(null) inode=14748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=74 name=(null) inode=14748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=75 name=(null) inode=14749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=76 name=(null) inode=14748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=77 name=(null) inode=14750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=78 name=(null) inode=14748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=79 name=(null) inode=14751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=80 name=(null) inode=14748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=81 name=(null) inode=14752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=82 name=(null) inode=14748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=83 name=(null) inode=14753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=84 
name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=85 name=(null) inode=14754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=86 name=(null) inode=14754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=87 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=88 name=(null) inode=14754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=89 name=(null) inode=14756 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=90 name=(null) inode=14754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=91 name=(null) inode=14757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=92 name=(null) inode=14754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=93 name=(null) inode=14758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=94 name=(null) inode=14754 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=95 name=(null) inode=14759 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=96 name=(null) inode=14739 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=97 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=98 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=99 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=100 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=101 name=(null) inode=14762 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=102 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=103 name=(null) inode=14763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=104 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=105 name=(null) inode=14764 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=106 name=(null) inode=14760 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=107 name=(null) inode=14765 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PATH item=109 name=(null) inode=14766 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:42:01.797000 audit: PROCTITLE proctitle="(udev-worker)" May 13 00:42:01.868441 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:42:01.948321 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:42:01.948444 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 00:42:01.948880 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:42:01.949052 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:42:01.976441 kernel: kvm: Nested Virtualization enabled May 13 00:42:01.976673 kernel: SVM: kvm: Nested Paging enabled May 13 00:42:01.976709 kernel: SVM: Virtual VMLOAD VMSAVE supported May 13 00:42:01.976775 kernel: SVM: Virtual GIF supported May 13 00:42:01.995429 kernel: EDAC MC: Ver: 3.0.0 May 13 00:42:02.022050 systemd[1]: Finished systemd-udev-settle.service. May 13 00:42:02.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.025134 systemd[1]: Starting lvm2-activation-early.service... May 13 00:42:02.037435 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:42:02.068957 systemd[1]: Finished lvm2-activation-early.service. 
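The block of audit PATH records above (items 0 through 109, emitted while (udev-worker) created tracefs and debugfs entries) is dense but regular: each record is a flat run of key=value fields. A minimal Python sketch for tallying them, assuming the lines have been captured to a file named audit.log (a hypothetical name; any copy of the text above works):

    import re
    from collections import Counter

    def parse_path_records(text):
        """Yield one dict of key=value fields per 'audit: PATH' record."""
        for chunk in re.findall(r"audit: PATH ([^\n]*?)(?= May \d\d|\Z)", text):
            yield dict(pair.split("=", 1) for pair in chunk.split() if "=" in pair)

    with open("audit.log") as fh:          # hypothetical capture of the records above
        records = list(parse_path_records(fh.read()))

    print(Counter(r.get("nametype") for r in records))   # CREATE vs PARENT counts
    print(Counter(r.get("mode") for r in records))       # 040750 dirs vs 0100640/0100440 files

This is only a reading aid for the raw text; on a live system ausearch and aureport would be the usual way to query these records.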
May 13 00:42:02.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.070598 systemd[1]: Reached target cryptsetup.target. May 13 00:42:02.073280 systemd[1]: Starting lvm2-activation.service... May 13 00:42:02.077575 lvm[1109]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:42:02.107733 systemd[1]: Finished lvm2-activation.service. May 13 00:42:02.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.109222 systemd[1]: Reached target local-fs-pre.target. May 13 00:42:02.110243 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:42:02.110273 systemd[1]: Reached target local-fs.target. May 13 00:42:02.111202 systemd[1]: Reached target machines.target. May 13 00:42:02.114424 systemd[1]: Starting ldconfig.service... May 13 00:42:02.115785 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:42:02.115855 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:02.118052 systemd[1]: Starting systemd-boot-update.service... May 13 00:42:02.120495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:42:02.123774 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:42:02.126713 systemd[1]: Starting systemd-sysext.service... May 13 00:42:02.127422 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1112 (bootctl) May 13 00:42:02.129480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:42:02.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.130908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 00:42:02.138638 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:42:02.143308 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:42:02.143705 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:42:02.158458 kernel: loop0: detected capacity change from 0 to 210664 May 13 00:42:02.186404 systemd-fsck[1124]: fsck.fat 4.2 (2021-01-31) May 13 00:42:02.186404 systemd-fsck[1124]: /dev/vda1: 790 files, 120692/258078 clusters May 13 00:42:02.188538 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:42:02.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.193673 systemd[1]: Mounting boot.mount... May 13 00:42:02.427884 systemd[1]: Mounted boot.mount. May 13 00:42:02.442860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:42:02.444803 systemd[1]: Finished systemd-machine-id-commit.service. 
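The systemd-fsck@dev-disk-by\x2dlabel-OEM.service and systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service instances above take their names from the escaped device paths /dev/disk/by-label/OEM and /dev/disk/by-label/EFI-SYSTEM. A simplified sketch of that path escaping (slashes collapse to dashes, most other special bytes become \xNN); the real rule lives in systemd's unit-name handling and covers more corner cases than this:

    import string

    SAFE = set(string.ascii_letters + string.digits + ":_.")

    # Approximate systemd path escaping, e.g. /dev/disk/by-label/OEM -> dev-disk-by\x2dlabel-OEM
    def escape_path(path):
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch in SAFE:
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_path("/dev/disk/by-label/OEM"))         # dev-disk-by\x2dlabel-OEM
    print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))  # dev-disk-by\x2dlabel-EFI\x2dSYSTEM

The same rule explains etc-machine\x2did.mount and the other escaped unit names in this log.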
May 13 00:42:02.445535 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:42:02.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.446887 systemd[1]: Finished systemd-boot-update.service. May 13 00:42:02.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.466424 kernel: loop1: detected capacity change from 0 to 210664 May 13 00:42:02.471273 (sd-sysext)[1133]: Using extensions 'kubernetes'. May 13 00:42:02.472001 (sd-sysext)[1133]: Merged extensions into '/usr'. May 13 00:42:02.488033 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:42:02.489778 systemd[1]: Mounting usr-share-oem.mount... May 13 00:42:02.496969 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:42:02.498513 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:42:02.500786 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:42:02.502815 systemd[1]: Starting modprobe@loop.service... May 13 00:42:02.504379 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:42:02.504534 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:02.504666 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:42:02.505902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:42:02.506103 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:42:02.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.507743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:42:02.507923 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:42:02.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.509652 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:42:02.509839 systemd[1]: Finished modprobe@loop.service. May 13 00:42:02.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:42:02.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.515063 systemd[1]: Mounted usr-share-oem.mount. May 13 00:42:02.516498 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:42:02.516643 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:42:02.517736 systemd[1]: Finished systemd-sysext.service. May 13 00:42:02.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.520445 systemd[1]: Starting ensure-sysext.service... May 13 00:42:02.522772 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:42:02.527303 systemd[1]: Reloading. May 13 00:42:02.565589 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:42:02.570856 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:42:02.573362 systemd-tmpfiles[1147]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:42:02.595802 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-05-13T00:42:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:42:02.595835 /usr/lib/systemd/system-generators/torcx-generator[1167]: time="2025-05-13T00:42:02Z" level=info msg="torcx already run" May 13 00:42:02.705491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:42:02.705521 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:42:02.733860 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:42:02.750847 ldconfig[1111]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:42:02.792043 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:42:02.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.796709 systemd[1]: Starting audit-rules.service... May 13 00:42:02.799185 systemd[1]: Starting clean-ca-certificates.service... May 13 00:42:02.801802 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:42:02.804503 systemd[1]: Starting systemd-resolved.service... May 13 00:42:02.807497 systemd[1]: Starting systemd-timesyncd.service... May 13 00:42:02.809850 systemd[1]: Starting systemd-update-utmp.service... 
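The "Duplicate line for path ..., ignoring" warnings above come from systemd-tmpfiles seeing the same path declared by more than one tmpfiles.d fragment. A rough sketch that surfaces such duplicates; it deliberately skips the masking step (an /etc fragment overriding a same-named /usr/lib fragment) that the real tool applies first, so treat its output as a hint only:

    import glob
    import os
    from collections import defaultdict

    def duplicate_tmpfiles_paths(dirs=("/usr/lib/tmpfiles.d", "/etc/tmpfiles.d", "/run/tmpfiles.d")):
        """Map each path declared on more than one tmpfiles.d line to its locations."""
        seen = defaultdict(list)
        for d in dirs:
            for conf in sorted(glob.glob(os.path.join(d, "*.conf"))):
                with open(conf) as fh:
                    for lineno, line in enumerate(fh, 1):
                        fields = line.split()
                        if len(fields) >= 2 and not fields[0].startswith("#"):
                            seen[fields[1]].append("%s:%d" % (conf, lineno))
        return {path: locs for path, locs in seen.items() if len(locs) > 1}

    for path, locs in sorted(duplicate_tmpfiles_paths().items()):
        print(path, "->", ", ".join(locs))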
May 13 00:42:02.811865 systemd[1]: Finished clean-ca-certificates.service. May 13 00:42:02.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.816917 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:42:02.816000 audit[1226]: SYSTEM_BOOT pid=1226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:42:02.827280 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:42:02.829658 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:42:02.832752 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:42:02.835352 systemd[1]: Starting modprobe@loop.service... May 13 00:42:02.836412 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:42:02.836562 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:02.836704 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:42:02.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.838688 systemd[1]: Finished systemd-update-utmp.service. May 13 00:42:02.840660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:42:02.840847 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:42:02.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.843143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:42:02.843337 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:42:02.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.845286 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:42:02.848601 systemd[1]: Finished modprobe@loop.service. May 13 00:42:02.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 13 00:42:02.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:42:02.852330 augenrules[1245]: No rules May 13 00:42:02.851000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:42:02.851000 audit[1245]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdfdb98d70 a2=420 a3=0 items=0 ppid=1217 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:42:02.851000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:42:02.852659 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:42:02.854371 systemd[1]: Finished audit-rules.service. May 13 00:42:02.856559 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:42:02.856682 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:42:02.858463 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:42:02.859993 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:42:02.862038 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:42:02.864079 systemd[1]: Starting modprobe@loop.service... May 13 00:42:02.865105 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:42:02.865317 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:02.865473 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:42:02.866476 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:42:02.866693 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:42:02.918360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:42:02.918622 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:42:02.920867 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:42:02.921162 systemd[1]: Finished modprobe@loop.service. May 13 00:42:02.922999 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:42:02.923325 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:42:02.927008 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:42:02.928382 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:42:02.930697 systemd[1]: Starting modprobe@drm.service... May 13 00:42:02.932962 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:42:02.935951 systemd[1]: Starting modprobe@loop.service... May 13 00:42:02.937027 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
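The PROCTITLE field in the audit record above is the process's argv, hex-encoded with NUL bytes between arguments. Decoding the value shown confirms it was auditctl loading the (empty, per augenrules) rule file:

    hex_proctitle = ("2F7362696E2F617564697463746C002D52"
                     "002F6574632F61756469742F61756469742E72756C6573")

    # The kernel audit subsystem hex-encodes argv and separates entries with NUL bytes.
    argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']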
May 13 00:42:02.937265 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:02.939151 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 00:42:02.940661 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:42:02.942286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:42:02.942545 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:42:02.943975 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:42:02.944129 systemd[1]: Finished modprobe@drm.service. May 13 00:42:02.945386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:42:02.945603 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:42:02.946907 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:42:02.947093 systemd[1]: Finished modprobe@loop.service. May 13 00:42:02.949037 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:42:02.949166 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:42:02.951822 systemd[1]: Finished ensure-sysext.service. May 13 00:42:02.961938 systemd[1]: Started systemd-timesyncd.service. May 13 00:42:02.963067 systemd[1]: Reached target time-set.target. May 13 00:42:02.964673 systemd-timesyncd[1223]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:42:02.964876 systemd-timesyncd[1223]: Initial clock synchronization to Tue 2025-05-13 00:42:03.099200 UTC. May 13 00:42:02.965531 systemd-resolved[1221]: Positive Trust Anchors: May 13 00:42:02.965553 systemd-resolved[1221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:42:02.965591 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:42:02.974718 systemd-resolved[1221]: Defaulting to hostname 'linux'. May 13 00:42:02.976934 systemd[1]: Started systemd-resolved.service. May 13 00:42:02.978344 systemd[1]: Reached target network.target. May 13 00:42:02.979491 systemd[1]: Reached target nss-lookup.target. May 13 00:42:03.006620 systemd[1]: Finished ldconfig.service. May 13 00:42:03.009943 systemd[1]: Starting systemd-update-done.service... May 13 00:42:03.019696 systemd[1]: Finished systemd-update-done.service. May 13 00:42:03.020997 systemd[1]: Reached target sysinit.target. May 13 00:42:03.022016 systemd[1]: Started motdgen.path. May 13 00:42:03.022995 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:42:03.024459 systemd[1]: Started logrotate.timer. May 13 00:42:03.025442 systemd[1]: Started mdadm.timer. May 13 00:42:03.026253 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:42:03.027442 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
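The positive trust anchor that systemd-resolved logs above is the root zone's DS record for the 2017 root KSK. Its fields unpack as owner, class, type, key tag, algorithm, digest type, and digest; a quick parse of the exact line from the log:

    ds_line = (". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = ds_line.split()
    print({
        "owner": owner,                            # "." = the DNS root
        "key_tag": int(key_tag),                   # 20326
        "algorithm": int(algorithm),               # 8 = RSA/SHA-256
        "digest_type": int(digest_type),           # 2 = SHA-256
        "digest_len": len(bytes.fromhex(digest)),  # 32 bytes, as SHA-256 requires
    })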
May 13 00:42:03.027544 systemd[1]: Reached target paths.target. May 13 00:42:03.028476 systemd[1]: Reached target timers.target. May 13 00:42:03.029839 systemd[1]: Listening on dbus.socket. May 13 00:42:03.032232 systemd[1]: Starting docker.socket... May 13 00:42:03.034555 systemd[1]: Listening on sshd.socket. May 13 00:42:03.039136 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:03.039671 systemd[1]: Listening on docker.socket. May 13 00:42:03.040651 systemd[1]: Reached target sockets.target. May 13 00:42:03.041560 systemd[1]: Reached target basic.target. May 13 00:42:03.042569 systemd[1]: System is tainted: cgroupsv1 May 13 00:42:03.042617 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:42:03.042637 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:42:03.043894 systemd[1]: Starting containerd.service... May 13 00:42:03.046024 systemd[1]: Starting dbus.service... May 13 00:42:03.049041 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:42:03.074251 systemd[1]: Starting extend-filesystems.service... May 13 00:42:03.074766 jq[1281]: false May 13 00:42:03.075512 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:42:03.076551 systemd-networkd[1079]: eth0: Gained IPv6LL May 13 00:42:03.077384 systemd[1]: Starting motdgen.service... May 13 00:42:03.080685 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:42:03.083467 systemd[1]: Starting sshd-keygen.service... May 13 00:42:03.088560 systemd[1]: Starting systemd-logind.service... May 13 00:42:03.102124 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:42:03.102270 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:42:03.104723 systemd[1]: Starting update-engine.service... May 13 00:42:03.107157 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:42:03.109470 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:42:03.111563 jq[1299]: true May 13 00:42:03.118976 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:42:03.119463 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:42:03.119982 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:42:03.120303 systemd[1]: Finished ssh-key-proc-cmdline.service. 
May 13 00:42:03.124452 extend-filesystems[1282]: Found loop1 May 13 00:42:03.124452 extend-filesystems[1282]: Found sr0 May 13 00:42:03.124452 extend-filesystems[1282]: Found vda May 13 00:42:03.124452 extend-filesystems[1282]: Found vda1 May 13 00:42:03.124452 extend-filesystems[1282]: Found vda2 May 13 00:42:03.124452 extend-filesystems[1282]: Found vda3 May 13 00:42:03.130638 extend-filesystems[1282]: Found usr May 13 00:42:03.130638 extend-filesystems[1282]: Found vda4 May 13 00:42:03.130638 extend-filesystems[1282]: Found vda6 May 13 00:42:03.130638 extend-filesystems[1282]: Found vda7 May 13 00:42:03.130638 extend-filesystems[1282]: Found vda9 May 13 00:42:03.130638 extend-filesystems[1282]: Checking size of /dev/vda9 May 13 00:42:03.135887 dbus-daemon[1279]: [system] SELinux support is enabled May 13 00:42:03.136797 jq[1306]: true May 13 00:42:03.137493 systemd[1]: Started dbus.service. May 13 00:42:03.146023 systemd[1]: Reached target network-online.target. May 13 00:42:03.147186 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:42:03.240134 systemd[1]: Starting kubelet.service... May 13 00:42:03.241181 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:42:03.241227 systemd[1]: Reached target system-config.target. May 13 00:42:03.242386 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:42:03.242442 systemd[1]: Reached target user-config.target. May 13 00:42:03.243584 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:42:03.244595 update_engine[1298]: I0513 00:42:03.244026 1298 main.cc:92] Flatcar Update Engine starting May 13 00:42:03.256899 update_engine[1298]: I0513 00:42:03.256760 1298 update_check_scheduler.cc:74] Next update check in 2m31s May 13 00:42:03.256787 systemd[1]: Started update-engine.service. May 13 00:42:03.259845 systemd[1]: Started locksmithd.service. May 13 00:42:03.261156 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:42:03.261443 systemd[1]: Finished motdgen.service. May 13 00:42:03.268123 extend-filesystems[1282]: Resized partition /dev/vda9 May 13 00:42:03.285907 extend-filesystems[1333]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:42:03.324528 env[1308]: time="2025-05-13T00:42:03.324455931Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:42:03.340318 env[1308]: time="2025-05-13T00:42:03.340245498Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:42:03.340497 env[1308]: time="2025-05-13T00:42:03.340460625Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:42:03.342222 env[1308]: time="2025-05-13T00:42:03.342158465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:42:03.342300 env[1308]: time="2025-05-13T00:42:03.342221973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 13 00:42:03.342647 env[1308]: time="2025-05-13T00:42:03.342615532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:42:03.342696 env[1308]: time="2025-05-13T00:42:03.342646400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:42:03.342696 env[1308]: time="2025-05-13T00:42:03.342664696Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:42:03.342696 env[1308]: time="2025-05-13T00:42:03.342677726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:42:03.342793 env[1308]: time="2025-05-13T00:42:03.342766998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:42:03.343116 env[1308]: time="2025-05-13T00:42:03.343087158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:42:03.343319 env[1308]: time="2025-05-13T00:42:03.343288786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:42:03.343319 env[1308]: time="2025-05-13T00:42:03.343315090Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:42:03.343429 env[1308]: time="2025-05-13T00:42:03.343389662Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:42:03.343484 env[1308]: time="2025-05-13T00:42:03.343429861Z" level=info msg="metadata content store policy set" policy=shared May 13 00:42:03.360364 systemd-logind[1290]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:42:03.360390 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:42:03.362551 systemd-logind[1290]: New seat seat0. May 13 00:42:03.368493 systemd[1]: Started systemd-logind.service. May 13 00:42:03.412436 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:42:03.446435 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:42:03.460645 locksmithd[1330]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:42:03.472333 extend-filesystems[1333]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:42:03.472333 extend-filesystems[1333]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:42:03.472333 extend-filesystems[1333]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.472177187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.472315023Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.472421053Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.473345203Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.473420843Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.473437806Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.473450235Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.473471220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.474266550Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.474289593Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.474304294Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.474316427Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.474482950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:42:03.482714 env[1308]: time="2025-05-13T00:42:03.474558123Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:42:03.483210 bash[1325]: Updated "/home/core/.ssh/authorized_keys" May 13 00:42:03.483326 extend-filesystems[1282]: Resized filesystem in /dev/vda9 May 13 00:42:03.472910 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.474872741Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.474899787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.474914468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.474966648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.474986635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475009292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475019866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475031042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475043216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475053903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475063449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475074950Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475177027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475190209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:42:03.487119 env[1308]: time="2025-05-13T00:42:03.475203188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:42:03.473178 systemd[1]: Finished extend-filesystems.service. May 13 00:42:03.487765 env[1308]: time="2025-05-13T00:42:03.475213804Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:42:03.487765 env[1308]: time="2025-05-13T00:42:03.475234494Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:42:03.487765 env[1308]: time="2025-05-13T00:42:03.475246373Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:42:03.487765 env[1308]: time="2025-05-13T00:42:03.475268969Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:42:03.487765 env[1308]: time="2025-05-13T00:42:03.475309209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:42:03.478211 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:42:03.484601 systemd[1]: Started containerd.service. 
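The resize figures above are counts of 4 KiB ext4 blocks, so /dev/vda9 grew from about 2.1 GiB to about 7.1 GiB when the root filesystem was expanded online. The arithmetic:

    BLOCK = 4096  # "(4k) blocks", per the resize output above

    for label, blocks in (("before", 553472), ("after", 1864699)):
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")

    # before: 553472 blocks = 2267021312 bytes = 2.11 GiB
    # after: 1864699 blocks = 7637807104 bytes = 7.11 GiB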
May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.475498837Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.475556558Z" level=info msg="Connect containerd service" May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.475600028Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.476071480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.484229816Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.484314280Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 00:42:03.488035 env[1308]: time="2025-05-13T00:42:03.484378063Z" level=info msg="containerd successfully booted in 0.160615s" May 13 00:42:03.493812 env[1308]: time="2025-05-13T00:42:03.493500869Z" level=info msg="Start subscribing containerd event" May 13 00:42:03.493812 env[1308]: time="2025-05-13T00:42:03.493642383Z" level=info msg="Start recovering state" May 13 00:42:03.493812 env[1308]: time="2025-05-13T00:42:03.493772710Z" level=info msg="Start event monitor" May 13 00:42:03.493812 env[1308]: time="2025-05-13T00:42:03.493812237Z" level=info msg="Start snapshots syncer" May 13 00:42:03.493979 env[1308]: time="2025-05-13T00:42:03.493826826Z" level=info msg="Start cni network conf syncer for default" May 13 00:42:03.493979 env[1308]: time="2025-05-13T00:42:03.493837665Z" level=info msg="Start streaming server" May 13 00:42:04.215986 sshd_keygen[1296]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:42:04.245542 systemd[1]: Finished sshd-keygen.service. May 13 00:42:04.248494 systemd[1]: Starting issuegen.service... May 13 00:42:04.254906 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:42:04.255172 systemd[1]: Finished issuegen.service. May 13 00:42:04.257912 systemd[1]: Starting systemd-user-sessions.service... May 13 00:42:04.264948 systemd[1]: Finished systemd-user-sessions.service. May 13 00:42:04.268248 systemd[1]: Started getty@tty1.service. May 13 00:42:04.272863 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:42:04.274287 systemd[1]: Reached target getty.target. May 13 00:42:04.572006 systemd[1]: Started kubelet.service. May 13 00:42:04.574815 systemd[1]: Reached target multi-user.target. May 13 00:42:04.579993 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:42:04.586219 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:42:04.586493 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:42:04.588591 systemd[1]: Startup finished in 6.056s (kernel) + 7.629s (userspace) = 13.686s. May 13 00:42:05.359884 kubelet[1376]: E0513 00:42:05.359807 1376 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:42:05.361519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:42:05.361732 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:42:05.612994 systemd[1]: Created slice system-sshd.slice. May 13 00:42:05.614613 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:34914.service. May 13 00:42:05.657046 sshd[1387]: Accepted publickey for core from 10.0.0.1 port 34914 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:05.658922 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:05.668525 systemd-logind[1290]: New session 1 of user core. May 13 00:42:05.669551 systemd[1]: Created slice user-500.slice. May 13 00:42:05.670767 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:42:05.682289 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:42:05.683841 systemd[1]: Starting user@500.service... 
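The first kubelet start above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; it is only written once the node is bootstrapped later in this log. Below is a minimal preflight sketch, not part of Flatcar or the kubelet itself, that checks the file is present and parses as YAML before the unit is restarted (it needs the third-party PyYAML package):

import os
import sys

import yaml  # PyYAML, third-party dependency

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the kubelet error above

def kubelet_config_ready(path: str = KUBELET_CONFIG) -> bool:
    """Return True if the kubelet config file exists and parses as YAML."""
    if not os.path.isfile(path):
        print(f"missing: {path} (kubelet will keep exiting until bootstrap creates it)")
        return False
    try:
        with open(path, encoding="utf-8") as fh:
            doc = yaml.safe_load(fh)
    except yaml.YAMLError as exc:
        print(f"unparseable: {path}: {exc}")
        return False
    # 'kind: KubeletConfiguration' is expected, but treat that as a soft check only.
    kind = doc.get("kind") if isinstance(doc, dict) else "?"
    print(f"ok: {path} parses, kind={kind}")
    return True

if __name__ == "__main__":
    sys.exit(0 if kubelet_config_ready() else 1)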
May 13 00:42:05.688307 (systemd)[1392]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:05.767203 systemd[1392]: Queued start job for default target default.target. May 13 00:42:05.767429 systemd[1392]: Reached target paths.target. May 13 00:42:05.767445 systemd[1392]: Reached target sockets.target. May 13 00:42:05.767456 systemd[1392]: Reached target timers.target. May 13 00:42:05.767467 systemd[1392]: Reached target basic.target. May 13 00:42:05.767510 systemd[1392]: Reached target default.target. May 13 00:42:05.767539 systemd[1392]: Startup finished in 70ms. May 13 00:42:05.767662 systemd[1]: Started user@500.service. May 13 00:42:05.768858 systemd[1]: Started session-1.scope. May 13 00:42:05.821478 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:34928.service. May 13 00:42:05.862043 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 34928 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:05.863741 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:05.868260 systemd-logind[1290]: New session 2 of user core. May 13 00:42:05.868994 systemd[1]: Started session-2.scope. May 13 00:42:05.925288 sshd[1401]: pam_unix(sshd:session): session closed for user core May 13 00:42:05.928575 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:34932.service. May 13 00:42:05.929288 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:34928.service: Deactivated successfully. May 13 00:42:05.930955 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit. May 13 00:42:05.931120 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:42:05.932331 systemd-logind[1290]: Removed session 2. May 13 00:42:05.965365 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 34932 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:05.966896 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:05.971151 systemd-logind[1290]: New session 3 of user core. May 13 00:42:05.972242 systemd[1]: Started session-3.scope. May 13 00:42:06.023514 sshd[1407]: pam_unix(sshd:session): session closed for user core May 13 00:42:06.026020 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:34940.service. May 13 00:42:06.027277 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:34932.service: Deactivated successfully. May 13 00:42:06.028087 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:42:06.028118 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit. May 13 00:42:06.029030 systemd-logind[1290]: Removed session 3. May 13 00:42:06.063210 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 34940 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:06.064480 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:06.068029 systemd-logind[1290]: New session 4 of user core. May 13 00:42:06.068833 systemd[1]: Started session-4.scope. May 13 00:42:06.124480 sshd[1413]: pam_unix(sshd:session): session closed for user core May 13 00:42:06.126869 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:34954.service. May 13 00:42:06.127932 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:34940.service: Deactivated successfully. May 13 00:42:06.128612 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit. May 13 00:42:06.128668 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:42:06.129463 systemd-logind[1290]: Removed session 4. 
May 13 00:42:06.168221 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 34954 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:06.169870 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:06.173855 systemd-logind[1290]: New session 5 of user core. May 13 00:42:06.174762 systemd[1]: Started session-5.scope. May 13 00:42:06.238466 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:42:06.238789 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:42:06.253640 systemd[1]: Starting coreos-metadata.service... May 13 00:42:06.266277 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:42:06.266574 systemd[1]: Finished coreos-metadata.service. May 13 00:42:07.050809 systemd[1]: Stopped kubelet.service. May 13 00:42:07.054074 systemd[1]: Starting kubelet.service... May 13 00:42:07.075235 systemd[1]: Reloading. May 13 00:42:07.164804 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-05-13T00:42:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:42:07.164847 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-05-13T00:42:07Z" level=info msg="torcx already run" May 13 00:42:07.707177 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:42:07.707202 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:42:07.727972 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:42:07.795452 systemd[1]: Started kubelet.service. May 13 00:42:07.799177 systemd[1]: Stopping kubelet.service... May 13 00:42:07.799674 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:42:07.800025 systemd[1]: Stopped kubelet.service. May 13 00:42:07.801693 systemd[1]: Starting kubelet.service... May 13 00:42:07.880857 systemd[1]: Started kubelet.service. May 13 00:42:07.941361 kubelet[1563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:42:07.941361 kubelet[1563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:42:07.941361 kubelet[1563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
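The reload above prints the exact replacements for the deprecated unit directives (CPUShares= becomes CPUWeight=, MemoryLimit= becomes MemoryMax=), and the kubelet likewise warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir should move into its config file. Below is a small sketch that scans a unit file for the deprecated directives named above and prints the suggested replacement; converting the values between shares and weight is deliberately not attempted:

import sys

# Directive replacements taken verbatim from the systemd warnings above.
DEPRECATED = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
}

def scan_unit(path):
    """Print each line of a unit file that still uses a deprecated directive."""
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            stripped = line.strip()
            for old, new in DEPRECATED.items():
                if stripped.startswith(old):
                    print(f"{path}:{lineno}: {old!r} is deprecated, use {new!r} "
                          "(value conversion not attempted here)")

if __name__ == "__main__":
    # locksmithd.service is the unit the warnings point at; any unit path works.
    scan_unit(sys.argv[1] if len(sys.argv) > 1 else
              "/usr/lib/systemd/system/locksmithd.service")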
May 13 00:42:07.942437 kubelet[1563]: I0513 00:42:07.942355 1563 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:42:08.435002 kubelet[1563]: I0513 00:42:08.434949 1563 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:42:08.435002 kubelet[1563]: I0513 00:42:08.434985 1563 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:42:08.435195 kubelet[1563]: I0513 00:42:08.435182 1563 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:42:08.444898 kubelet[1563]: I0513 00:42:08.444840 1563 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:42:08.463292 kubelet[1563]: I0513 00:42:08.463241 1563 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:42:08.465792 kubelet[1563]: I0513 00:42:08.465734 1563 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:42:08.466032 kubelet[1563]: I0513 00:42:08.465790 1563 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.58","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:42:08.466544 kubelet[1563]: I0513 00:42:08.466517 1563 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:42:08.466576 kubelet[1563]: I0513 00:42:08.466549 1563 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:42:08.466725 kubelet[1563]: I0513 00:42:08.466702 1563 state_mem.go:36] "Initialized new in-memory state store" May 13 00:42:08.467522 kubelet[1563]: I0513 00:42:08.467498 1563 kubelet.go:400] "Attempting to sync node with API server" May 13 00:42:08.467555 kubelet[1563]: I0513 00:42:08.467525 1563 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:42:08.467577 kubelet[1563]: I0513 00:42:08.467560 1563 kubelet.go:312] "Adding apiserver pod source" May 13 00:42:08.467599 
kubelet[1563]: I0513 00:42:08.467589 1563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:42:08.467664 kubelet[1563]: E0513 00:42:08.467633 1563 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:08.467823 kubelet[1563]: E0513 00:42:08.467775 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:08.471134 kubelet[1563]: I0513 00:42:08.471086 1563 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:42:08.472294 kubelet[1563]: W0513 00:42:08.472259 1563 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.58" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:42:08.472294 kubelet[1563]: E0513 00:42:08.472299 1563 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.58" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:42:08.472381 kubelet[1563]: I0513 00:42:08.472318 1563 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:42:08.472381 kubelet[1563]: W0513 00:42:08.472345 1563 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:42:08.472381 kubelet[1563]: W0513 00:42:08.472374 1563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:42:08.472505 kubelet[1563]: E0513 00:42:08.472454 1563 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:42:08.473035 kubelet[1563]: I0513 00:42:08.473008 1563 server.go:1264] "Started kubelet" May 13 00:42:08.473658 kubelet[1563]: I0513 00:42:08.473607 1563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:42:08.474045 kubelet[1563]: I0513 00:42:08.474021 1563 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:42:08.474152 kubelet[1563]: I0513 00:42:08.474058 1563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:42:08.979626 kubelet[1563]: I0513 00:42:08.979579 1563 server.go:455] "Adding debug handlers to kubelet server" May 13 00:42:08.981586 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 13 00:42:08.982437 kubelet[1563]: I0513 00:42:08.981709 1563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:42:08.984629 kubelet[1563]: E0513 00:42:08.984593 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:08.984720 kubelet[1563]: I0513 00:42:08.984678 1563 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:42:08.984848 kubelet[1563]: E0513 00:42:08.984809 1563 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:42:08.984896 kubelet[1563]: I0513 00:42:08.984861 1563 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:42:08.985007 kubelet[1563]: I0513 00:42:08.984982 1563 reconciler.go:26] "Reconciler: start to sync state" May 13 00:42:08.985920 kubelet[1563]: I0513 00:42:08.985881 1563 factory.go:221] Registration of the systemd container factory successfully May 13 00:42:08.986126 kubelet[1563]: I0513 00:42:08.986024 1563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:42:08.988280 kubelet[1563]: I0513 00:42:08.988238 1563 factory.go:221] Registration of the containerd container factory successfully May 13 00:42:08.990141 kubelet[1563]: E0513 00:42:08.990101 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.58\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 00:42:08.990364 kubelet[1563]: W0513 00:42:08.990218 1563 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:42:08.990364 kubelet[1563]: E0513 00:42:08.990246 1563 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:42:08.990473 kubelet[1563]: E0513 00:42:08.990318 1563 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.58.183eef6661ceded6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.58,UID:10.0.0.58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.58,},FirstTimestamp:2025-05-13 00:42:08.472981206 +0000 UTC m=+0.587076422,LastTimestamp:2025-05-13 00:42:08.472981206 +0000 UTC m=+0.587076422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.58,}" May 13 00:42:08.993674 kubelet[1563]: E0513 00:42:08.993598 1563 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.58.183eef6680508675 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.58,UID:10.0.0.58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.58,},FirstTimestamp:2025-05-13 00:42:08.984794741 +0000 UTC m=+1.098889966,LastTimestamp:2025-05-13 00:42:08.984794741 +0000 UTC m=+1.098889966,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.58,}" May 13 00:42:09.011167 kubelet[1563]: I0513 00:42:09.011117 1563 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:42:09.011167 kubelet[1563]: I0513 00:42:09.011139 1563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:42:09.011167 kubelet[1563]: I0513 00:42:09.011160 1563 state_mem.go:36] "Initialized new in-memory state store" May 13 00:42:09.057446 kubelet[1563]: I0513 00:42:09.057408 1563 policy_none.go:49] "None policy: Start" May 13 00:42:09.058168 kubelet[1563]: I0513 00:42:09.058142 1563 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:42:09.058168 kubelet[1563]: I0513 00:42:09.058174 1563 state_mem.go:35] "Initializing new in-memory state store" May 13 00:42:09.064030 kubelet[1563]: I0513 00:42:09.064007 1563 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:42:09.064200 kubelet[1563]: I0513 00:42:09.064151 1563 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:42:09.064271 kubelet[1563]: I0513 00:42:09.064259 1563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:42:09.065713 kubelet[1563]: E0513 00:42:09.065693 1563 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.58\" not found" May 13 00:42:09.086515 kubelet[1563]: I0513 00:42:09.086476 1563 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.58" May 13 00:42:09.093053 kubelet[1563]: I0513 00:42:09.093025 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:42:09.093953 kubelet[1563]: I0513 00:42:09.093924 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:42:09.094013 kubelet[1563]: I0513 00:42:09.093965 1563 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:42:09.094013 kubelet[1563]: I0513 00:42:09.094003 1563 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:42:09.094075 kubelet[1563]: E0513 00:42:09.094065 1563 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 00:42:09.101017 kubelet[1563]: I0513 00:42:09.100978 1563 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.58" May 13 00:42:09.133871 kubelet[1563]: E0513 00:42:09.133802 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:09.234600 kubelet[1563]: E0513 00:42:09.234356 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:09.334859 kubelet[1563]: E0513 00:42:09.334742 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:09.399045 sudo[1426]: pam_unix(sudo:session): session closed for user root May 13 00:42:09.400623 sshd[1420]: pam_unix(sshd:session): session closed for user core May 13 00:42:09.402866 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:34954.service: Deactivated successfully. May 13 00:42:09.404113 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:42:09.404169 systemd-logind[1290]: Session 5 logged out. Waiting for processes to exit. May 13 00:42:09.405214 systemd-logind[1290]: Removed session 5. 
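The container manager configuration dumped a little earlier lists the default hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. Below is a rough, stdlib-only sketch of evaluating two of those signals on the node; it uses MemAvailable from /proc/meminfo and free space under the kubelet root dir, which approximates but does not reproduce the kubelet's own accounting:

import shutil

NODEFS_PATH = "/var/lib/kubelet"      # KubeletRootDir from the config dump above
MEMORY_MIN_BYTES = 100 * 1024 * 1024  # memory.available < 100Mi
NODEFS_MIN_FRACTION = 0.10            # nodefs.available < 10%

def mem_available_bytes() -> int:
    """MemAvailable from /proc/meminfo, in bytes (the kernel reports kB)."""
    with open("/proc/meminfo", encoding="ascii") as fh:
        for line in fh:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) * 1024
    raise RuntimeError("MemAvailable not found in /proc/meminfo")

def check():
    mem = mem_available_bytes()
    disk = shutil.disk_usage(NODEFS_PATH)
    signals = {
        "memory.available": mem < MEMORY_MIN_BYTES,
        "nodefs.available": disk.free / disk.total < NODEFS_MIN_FRACTION,
    }
    for name, tripped in signals.items():
        print(f"{name}: {'UNDER eviction threshold' if tripped else 'ok'}")

if __name__ == "__main__":
    check()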
May 13 00:42:09.435653 kubelet[1563]: E0513 00:42:09.435565 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:09.436724 kubelet[1563]: I0513 00:42:09.436665 1563 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 00:42:09.437036 kubelet[1563]: W0513 00:42:09.436993 1563 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:42:09.468509 kubelet[1563]: E0513 00:42:09.468440 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:09.536861 kubelet[1563]: E0513 00:42:09.536702 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:09.637253 kubelet[1563]: E0513 00:42:09.637138 1563 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.58\" not found" May 13 00:42:09.738672 kubelet[1563]: I0513 00:42:09.738628 1563 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 00:42:09.739100 env[1308]: time="2025-05-13T00:42:09.739049862Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:42:09.739412 kubelet[1563]: I0513 00:42:09.739238 1563 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 00:42:10.469255 kubelet[1563]: I0513 00:42:10.469188 1563 apiserver.go:52] "Watching apiserver" May 13 00:42:10.469695 kubelet[1563]: E0513 00:42:10.469299 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:10.474187 kubelet[1563]: I0513 00:42:10.474087 1563 topology_manager.go:215] "Topology Admit Handler" podUID="218c7015-c112-42f9-9155-ab20605eafda" podNamespace="kube-system" podName="cilium-82jqh" May 13 00:42:10.474439 kubelet[1563]: I0513 00:42:10.474332 1563 topology_manager.go:215] "Topology Admit Handler" podUID="9c285b5d-d32e-4737-abee-e58782390934" podNamespace="kube-system" podName="kube-proxy-mzvq7" May 13 00:42:10.486300 kubelet[1563]: I0513 00:42:10.486254 1563 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:42:10.494215 kubelet[1563]: I0513 00:42:10.494174 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-lib-modules\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494215 kubelet[1563]: I0513 00:42:10.494211 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-net\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494376 kubelet[1563]: I0513 00:42:10.494230 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-hubble-tls\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494376 kubelet[1563]: I0513 00:42:10.494249 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-kernel\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494376 kubelet[1563]: I0513 00:42:10.494279 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c285b5d-d32e-4737-abee-e58782390934-kube-proxy\") pod \"kube-proxy-mzvq7\" (UID: \"9c285b5d-d32e-4737-abee-e58782390934\") " pod="kube-system/kube-proxy-mzvq7" May 13 00:42:10.494376 kubelet[1563]: I0513 00:42:10.494302 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c285b5d-d32e-4737-abee-e58782390934-xtables-lock\") pod \"kube-proxy-mzvq7\" (UID: \"9c285b5d-d32e-4737-abee-e58782390934\") " pod="kube-system/kube-proxy-mzvq7" May 13 00:42:10.494376 kubelet[1563]: I0513 00:42:10.494350 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbdl2\" (UniqueName: \"kubernetes.io/projected/9c285b5d-d32e-4737-abee-e58782390934-kube-api-access-wbdl2\") pod \"kube-proxy-mzvq7\" (UID: \"9c285b5d-d32e-4737-abee-e58782390934\") " pod="kube-system/kube-proxy-mzvq7" May 13 00:42:10.494546 kubelet[1563]: I0513 00:42:10.494366 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-run\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494546 kubelet[1563]: I0513 00:42:10.494385 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cni-path\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494546 kubelet[1563]: I0513 00:42:10.494416 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-xtables-lock\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494546 kubelet[1563]: I0513 00:42:10.494482 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-etc-cni-netd\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494638 kubelet[1563]: I0513 00:42:10.494543 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/218c7015-c112-42f9-9155-ab20605eafda-clustermesh-secrets\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " 
pod="kube-system/cilium-82jqh" May 13 00:42:10.494638 kubelet[1563]: I0513 00:42:10.494578 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/218c7015-c112-42f9-9155-ab20605eafda-cilium-config-path\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494638 kubelet[1563]: I0513 00:42:10.494609 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxh5z\" (UniqueName: \"kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-kube-api-access-gxh5z\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494638 kubelet[1563]: I0513 00:42:10.494634 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c285b5d-d32e-4737-abee-e58782390934-lib-modules\") pod \"kube-proxy-mzvq7\" (UID: \"9c285b5d-d32e-4737-abee-e58782390934\") " pod="kube-system/kube-proxy-mzvq7" May 13 00:42:10.494728 kubelet[1563]: I0513 00:42:10.494659 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-bpf-maps\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494728 kubelet[1563]: I0513 00:42:10.494683 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-hostproc\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.494728 kubelet[1563]: I0513 00:42:10.494706 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-cgroup\") pod \"cilium-82jqh\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " pod="kube-system/cilium-82jqh" May 13 00:42:10.778107 kubelet[1563]: E0513 00:42:10.777952 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:10.778699 kubelet[1563]: E0513 00:42:10.778667 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:10.779001 env[1308]: time="2025-05-13T00:42:10.778953358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82jqh,Uid:218c7015-c112-42f9-9155-ab20605eafda,Namespace:kube-system,Attempt:0,}" May 13 00:42:10.779545 env[1308]: time="2025-05-13T00:42:10.779517442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzvq7,Uid:9c285b5d-d32e-4737-abee-e58782390934,Namespace:kube-system,Attempt:0,}" May 13 00:42:11.469643 kubelet[1563]: E0513 00:42:11.469563 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:12.470825 kubelet[1563]: E0513 00:42:12.470716 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 13 00:42:12.958820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762424373.mount: Deactivated successfully. May 13 00:42:13.266191 env[1308]: time="2025-05-13T00:42:13.266021386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.304763 env[1308]: time="2025-05-13T00:42:13.304693524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.385203 env[1308]: time="2025-05-13T00:42:13.385132931Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.420052 env[1308]: time="2025-05-13T00:42:13.419963259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.446642 env[1308]: time="2025-05-13T00:42:13.446533853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.470767 env[1308]: time="2025-05-13T00:42:13.470679298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.471543 kubelet[1563]: E0513 00:42:13.471487 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:13.473221 env[1308]: time="2025-05-13T00:42:13.473180504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.474755 env[1308]: time="2025-05-13T00:42:13.474715890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:13.540755 env[1308]: time="2025-05-13T00:42:13.540562741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:13.540755 env[1308]: time="2025-05-13T00:42:13.540630829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:13.540755 env[1308]: time="2025-05-13T00:42:13.540645510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:13.540978 env[1308]: time="2025-05-13T00:42:13.540921966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7a83edb8ff80983f3ac0a90277e9ffd1aeb86850923e4f5fd919cbc9e48f0df pid=1617 runtime=io.containerd.runc.v2 May 13 00:42:13.541160 env[1308]: time="2025-05-13T00:42:13.541102418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:13.541160 env[1308]: time="2025-05-13T00:42:13.541135355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:13.541160 env[1308]: time="2025-05-13T00:42:13.541145166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:13.541301 env[1308]: time="2025-05-13T00:42:13.541243312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a pid=1626 runtime=io.containerd.runc.v2 May 13 00:42:13.578204 env[1308]: time="2025-05-13T00:42:13.578149417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82jqh,Uid:218c7015-c112-42f9-9155-ab20605eafda,Namespace:kube-system,Attempt:0,} returns sandbox id \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\"" May 13 00:42:13.579310 kubelet[1563]: E0513 00:42:13.579276 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:13.581022 env[1308]: time="2025-05-13T00:42:13.580985998Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:42:13.582189 env[1308]: time="2025-05-13T00:42:13.582156514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mzvq7,Uid:9c285b5d-d32e-4737-abee-e58782390934,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7a83edb8ff80983f3ac0a90277e9ffd1aeb86850923e4f5fd919cbc9e48f0df\"" May 13 00:42:13.583036 kubelet[1563]: E0513 00:42:13.582799 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:14.472331 kubelet[1563]: E0513 00:42:14.472256 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:15.472663 kubelet[1563]: E0513 00:42:15.472604 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:16.473125 kubelet[1563]: E0513 00:42:16.473082 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:17.473624 kubelet[1563]: E0513 00:42:17.473552 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:18.474627 kubelet[1563]: E0513 00:42:18.474554 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:18.828145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827425201.mount: Deactivated successfully. 
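The recurring file_linux.go errors above ("Unable to read config path ... /etc/kubernetes/manifests") come from the kubelet's static pod file source: staticPodPath points at a directory that does not exist yet, so the source ignores it and keeps retrying. If static pods are wanted on this node, creating the directory quiets the error and lets manifests dropped there be picked up; a trivial sketch follows (running mkdir -p /etc/kubernetes/manifests as root is equivalent):

import os

STATIC_POD_PATH = "/etc/kubernetes/manifests"  # path from the kubelet errors above

# Create the directory the kubelet's static-pod file source is watching for.
# exist_ok makes this safe to run repeatedly; writing under /etc needs root.
os.makedirs(STATIC_POD_PATH, mode=0o755, exist_ok=True)
print(f"{STATIC_POD_PATH} exists; the kubelet will pick up manifests placed here")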
May 13 00:42:19.475045 kubelet[1563]: E0513 00:42:19.474912 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:20.476163 kubelet[1563]: E0513 00:42:20.476105 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:21.476964 kubelet[1563]: E0513 00:42:21.476882 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:22.477553 kubelet[1563]: E0513 00:42:22.477472 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:23.411525 env[1308]: time="2025-05-13T00:42:23.411452945Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:23.413543 env[1308]: time="2025-05-13T00:42:23.413509421Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:23.415553 env[1308]: time="2025-05-13T00:42:23.415517751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:23.416207 env[1308]: time="2025-05-13T00:42:23.416167134Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:42:23.417639 env[1308]: time="2025-05-13T00:42:23.417586666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:42:23.419326 env[1308]: time="2025-05-13T00:42:23.419282456Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:42:23.433675 env[1308]: time="2025-05-13T00:42:23.433622816Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\"" May 13 00:42:23.435223 env[1308]: time="2025-05-13T00:42:23.435180697Z" level=info msg="StartContainer for \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\"" May 13 00:42:23.476745 env[1308]: time="2025-05-13T00:42:23.476689881Z" level=info msg="StartContainer for \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\" returns successfully" May 13 00:42:23.477617 kubelet[1563]: E0513 00:42:23.477588 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:23.917655 env[1308]: time="2025-05-13T00:42:23.917598716Z" level=info msg="shim disconnected" id=bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c May 13 00:42:23.917853 env[1308]: time="2025-05-13T00:42:23.917660194Z" level=warning msg="cleaning up after shim disconnected" 
id=bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c namespace=k8s.io May 13 00:42:23.917853 env[1308]: time="2025-05-13T00:42:23.917669903Z" level=info msg="cleaning up dead shim" May 13 00:42:23.926923 env[1308]: time="2025-05-13T00:42:23.926879397Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1743 runtime=io.containerd.runc.v2\n" May 13 00:42:24.118105 kubelet[1563]: E0513 00:42:24.118069 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:24.119749 env[1308]: time="2025-05-13T00:42:24.119704926Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:42:24.428736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c-rootfs.mount: Deactivated successfully. May 13 00:42:24.477741 kubelet[1563]: E0513 00:42:24.477698 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:24.528011 env[1308]: time="2025-05-13T00:42:24.527961941Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\"" May 13 00:42:24.528734 env[1308]: time="2025-05-13T00:42:24.528671243Z" level=info msg="StartContainer for \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\"" May 13 00:42:24.581813 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:42:24.582164 systemd[1]: Stopped systemd-sysctl.service. May 13 00:42:24.584135 systemd[1]: Stopping systemd-sysctl.service... May 13 00:42:24.585665 systemd[1]: Starting systemd-sysctl.service... May 13 00:42:24.587445 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:42:24.596210 systemd[1]: Finished systemd-sysctl.service. 
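The cilium pod's init containers run back to back here: mount-cgroup, then apply-sysctl-overwrites, with mount-bpf-fs and clean-cilium-state following below, and each short-lived container ends in the normal "shim disconnected" cleanup. Below is a small sketch, again assuming the journal has been exported to a text file (the path is hypothetical), that maps container names to the container IDs containerd returns, which makes the later shim-disconnected lines easier to match up:

import re
import sys

# Matches lines like:
#   CreateContainer within sandbox "dee56..." for &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
#   returns container id \"bdfba942...\"
CREATE_RE = re.compile(
    r'CreateContainer within sandbox .*?&ContainerMetadata\{Name:(?P<name>[\w-]+),'
    r'.*?returns container id \\?"(?P<cid>[0-9a-f]+)\\?"'
)

def container_ids(path):
    """Return {container name: container id} from containerd CreateContainer responses."""
    ids = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = CREATE_RE.search(line)
            if m:
                ids[m.group("name")] = m.group("cid")
    return ids

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/tmp/node-journal.log"  # path is an assumption
    for name, cid in container_ids(path).items():
        print(f"{name}: {cid[:12]}")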
May 13 00:42:24.657826 env[1308]: time="2025-05-13T00:42:24.657734228Z" level=info msg="StartContainer for \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\" returns successfully" May 13 00:42:24.846842 env[1308]: time="2025-05-13T00:42:24.846684477Z" level=info msg="shim disconnected" id=b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea May 13 00:42:24.846842 env[1308]: time="2025-05-13T00:42:24.846730771Z" level=warning msg="cleaning up after shim disconnected" id=b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea namespace=k8s.io May 13 00:42:24.846842 env[1308]: time="2025-05-13T00:42:24.846740259Z" level=info msg="cleaning up dead shim" May 13 00:42:24.853628 env[1308]: time="2025-05-13T00:42:24.853567275Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1809 runtime=io.containerd.runc.v2\n" May 13 00:42:25.122123 kubelet[1563]: E0513 00:42:25.121819 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:25.123827 env[1308]: time="2025-05-13T00:42:25.123785575Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:42:25.299049 env[1308]: time="2025-05-13T00:42:25.298970708Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\"" May 13 00:42:25.299722 env[1308]: time="2025-05-13T00:42:25.299675798Z" level=info msg="StartContainer for \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\"" May 13 00:42:25.344411 env[1308]: time="2025-05-13T00:42:25.344349689Z" level=info msg="StartContainer for \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\" returns successfully" May 13 00:42:25.404722 env[1308]: time="2025-05-13T00:42:25.404600789Z" level=info msg="shim disconnected" id=505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374 May 13 00:42:25.404722 env[1308]: time="2025-05-13T00:42:25.404650878Z" level=warning msg="cleaning up after shim disconnected" id=505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374 namespace=k8s.io May 13 00:42:25.404722 env[1308]: time="2025-05-13T00:42:25.404660073Z" level=info msg="cleaning up dead shim" May 13 00:42:25.411200 env[1308]: time="2025-05-13T00:42:25.411150794Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1865 runtime=io.containerd.runc.v2\n" May 13 00:42:25.430135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea-rootfs.mount: Deactivated successfully. May 13 00:42:25.478063 kubelet[1563]: E0513 00:42:25.477974 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:25.611340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3313221089.mount: Deactivated successfully. 
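The kubelet dns.go warnings that recur through this stretch ("Nameserver limits exceeded", with 1.1.1.1, 1.0.0.1 and 8.8.8.8 as the applied line) mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods, so only the first few are kept. Below is a quick stdlib sketch for checking that on the node; the three-entry cutoff used here is the conventional resolver limit and is stated as an assumption:

RESOLV_CONF = "/etc/resolv.conf"
MAX_NAMESERVERS = 3  # limit assumed from the kubelet warning; extra entries are dropped

def nameservers(path: str = RESOLV_CONF):
    """Return the nameserver addresses listed in resolv.conf, in order."""
    with open(path, encoding="utf-8") as fh:
        return [line.split()[1]
                for line in fh
                if line.strip().startswith("nameserver") and len(line.split()) > 1]

if __name__ == "__main__":
    servers = nameservers()
    print(f"{len(servers)} nameserver(s): {', '.join(servers) or 'none'}")
    if len(servers) > MAX_NAMESERVERS:
        print(f"only the first {MAX_NAMESERVERS} will be applied: "
              f"{', '.join(servers[:MAX_NAMESERVERS])}")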
May 13 00:42:26.124947 kubelet[1563]: E0513 00:42:26.124911 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:26.126839 env[1308]: time="2025-05-13T00:42:26.126796551Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:42:26.146365 env[1308]: time="2025-05-13T00:42:26.146312998Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\"" May 13 00:42:26.146884 env[1308]: time="2025-05-13T00:42:26.146862999Z" level=info msg="StartContainer for \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\"" May 13 00:42:26.185598 env[1308]: time="2025-05-13T00:42:26.185540139Z" level=info msg="StartContainer for \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\" returns successfully" May 13 00:42:26.191239 env[1308]: time="2025-05-13T00:42:26.191184412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:26.193683 env[1308]: time="2025-05-13T00:42:26.193651941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:26.195272 env[1308]: time="2025-05-13T00:42:26.195248540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:26.197256 env[1308]: time="2025-05-13T00:42:26.197183507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:26.197515 env[1308]: time="2025-05-13T00:42:26.197485418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:42:26.199822 env[1308]: time="2025-05-13T00:42:26.199797555Z" level=info msg="CreateContainer within sandbox \"f7a83edb8ff80983f3ac0a90277e9ffd1aeb86850923e4f5fd919cbc9e48f0df\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:42:26.407578 env[1308]: time="2025-05-13T00:42:26.407507711Z" level=info msg="shim disconnected" id=c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda May 13 00:42:26.407578 env[1308]: time="2025-05-13T00:42:26.407555890Z" level=warning msg="cleaning up after shim disconnected" id=c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda namespace=k8s.io May 13 00:42:26.407578 env[1308]: time="2025-05-13T00:42:26.407566447Z" level=info msg="cleaning up dead shim" May 13 00:42:26.415514 env[1308]: time="2025-05-13T00:42:26.415441453Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1923 runtime=io.containerd.runc.v2\n" May 13 00:42:26.416621 env[1308]: 
time="2025-05-13T00:42:26.416563561Z" level=info msg="CreateContainer within sandbox \"f7a83edb8ff80983f3ac0a90277e9ffd1aeb86850923e4f5fd919cbc9e48f0df\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68d0a7e53eff945c378a0d5859b72e3c384c2e562e6bab71c6c4aeb0bf08f61d\"" May 13 00:42:26.417184 env[1308]: time="2025-05-13T00:42:26.417130245Z" level=info msg="StartContainer for \"68d0a7e53eff945c378a0d5859b72e3c384c2e562e6bab71c6c4aeb0bf08f61d\"" May 13 00:42:26.429136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda-rootfs.mount: Deactivated successfully. May 13 00:42:26.467513 env[1308]: time="2025-05-13T00:42:26.467448287Z" level=info msg="StartContainer for \"68d0a7e53eff945c378a0d5859b72e3c384c2e562e6bab71c6c4aeb0bf08f61d\" returns successfully" May 13 00:42:26.478647 kubelet[1563]: E0513 00:42:26.478586 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:27.128048 kubelet[1563]: E0513 00:42:27.128009 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:27.129314 kubelet[1563]: E0513 00:42:27.129251 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:27.130828 env[1308]: time="2025-05-13T00:42:27.130769994Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:42:27.146119 env[1308]: time="2025-05-13T00:42:27.146057740Z" level=info msg="CreateContainer within sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\"" May 13 00:42:27.146653 env[1308]: time="2025-05-13T00:42:27.146620640Z" level=info msg="StartContainer for \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\"" May 13 00:42:27.149205 kubelet[1563]: I0513 00:42:27.149147 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mzvq7" podStartSLOduration=5.533701732 podStartE2EDuration="18.149126689s" podCreationTimestamp="2025-05-13 00:42:09 +0000 UTC" firstStartedPulling="2025-05-13 00:42:13.583237084 +0000 UTC m=+5.697332299" lastFinishedPulling="2025-05-13 00:42:26.198662041 +0000 UTC m=+18.312757256" observedRunningTime="2025-05-13 00:42:27.148938076 +0000 UTC m=+19.263033301" watchObservedRunningTime="2025-05-13 00:42:27.149126689 +0000 UTC m=+19.263221904" May 13 00:42:27.188685 env[1308]: time="2025-05-13T00:42:27.188619244Z" level=info msg="StartContainer for \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" returns successfully" May 13 00:42:27.292200 kubelet[1563]: I0513 00:42:27.292158 1563 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:42:27.478903 kubelet[1563]: E0513 00:42:27.478854 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:27.509423 kernel: Initializing XFRM netlink socket May 13 00:42:28.133689 kubelet[1563]: E0513 00:42:28.133651 1563 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.134092 kubelet[1563]: E0513 00:42:28.134072 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:28.468321 kubelet[1563]: E0513 00:42:28.468256 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:28.479695 kubelet[1563]: E0513 00:42:28.479642 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:29.135793 kubelet[1563]: E0513 00:42:29.135736 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:29.161863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:42:29.162009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:42:29.163419 systemd-networkd[1079]: cilium_host: Link UP May 13 00:42:29.163535 systemd-networkd[1079]: cilium_net: Link UP May 13 00:42:29.163658 systemd-networkd[1079]: cilium_net: Gained carrier May 13 00:42:29.163837 systemd-networkd[1079]: cilium_host: Gained carrier May 13 00:42:29.213571 systemd-networkd[1079]: cilium_net: Gained IPv6LL May 13 00:42:29.253523 systemd-networkd[1079]: cilium_vxlan: Link UP May 13 00:42:29.253533 systemd-networkd[1079]: cilium_vxlan: Gained carrier May 13 00:42:29.468529 systemd-networkd[1079]: cilium_host: Gained IPv6LL May 13 00:42:29.480774 kubelet[1563]: E0513 00:42:29.480705 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:29.505426 kernel: NET: Registered PF_ALG protocol family May 13 00:42:30.060043 systemd-networkd[1079]: lxc_health: Link UP May 13 00:42:30.068852 systemd-networkd[1079]: lxc_health: Gained carrier May 13 00:42:30.069622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:42:30.137384 kubelet[1563]: E0513 00:42:30.137353 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:30.481806 kubelet[1563]: E0513 00:42:30.481738 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:30.666343 kubelet[1563]: I0513 00:42:30.666247 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-82jqh" podStartSLOduration=11.829303044 podStartE2EDuration="21.666219785s" podCreationTimestamp="2025-05-13 00:42:09 +0000 UTC" firstStartedPulling="2025-05-13 00:42:13.580379773 +0000 UTC m=+5.694474988" lastFinishedPulling="2025-05-13 00:42:23.417296484 +0000 UTC m=+15.531391729" observedRunningTime="2025-05-13 00:42:28.15712703 +0000 UTC m=+20.271222245" watchObservedRunningTime="2025-05-13 00:42:30.666219785 +0000 UTC m=+22.780315000" May 13 00:42:30.666671 kubelet[1563]: I0513 00:42:30.666630 1563 topology_manager.go:215] "Topology Admit Handler" podUID="ccaff0b3-9f46-4b4a-a6a6-57a99d7cc1ec" podNamespace="default" podName="nginx-deployment-85f456d6dd-vtmgj" May 13 00:42:30.736476 kubelet[1563]: I0513 00:42:30.736329 1563 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd9zg\" (UniqueName: \"kubernetes.io/projected/ccaff0b3-9f46-4b4a-a6a6-57a99d7cc1ec-kube-api-access-jd9zg\") pod \"nginx-deployment-85f456d6dd-vtmgj\" (UID: \"ccaff0b3-9f46-4b4a-a6a6-57a99d7cc1ec\") " pod="default/nginx-deployment-85f456d6dd-vtmgj" May 13 00:42:30.971003 env[1308]: time="2025-05-13T00:42:30.970941828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vtmgj,Uid:ccaff0b3-9f46-4b4a-a6a6-57a99d7cc1ec,Namespace:default,Attempt:0,}" May 13 00:42:31.001137 systemd-networkd[1079]: lxc40c1f126a90c: Link UP May 13 00:42:31.011443 kernel: eth0: renamed from tmpce856 May 13 00:42:31.019876 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:42:31.020010 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc40c1f126a90c: link becomes ready May 13 00:42:31.020053 systemd-networkd[1079]: lxc40c1f126a90c: Gained carrier May 13 00:42:31.138644 kubelet[1563]: E0513 00:42:31.138611 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:31.236558 systemd-networkd[1079]: cilium_vxlan: Gained IPv6LL May 13 00:42:31.482709 kubelet[1563]: E0513 00:42:31.482653 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:31.492559 systemd-networkd[1079]: lxc_health: Gained IPv6LL May 13 00:42:32.483020 kubelet[1563]: E0513 00:42:32.482954 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:32.772679 systemd-networkd[1079]: lxc40c1f126a90c: Gained IPv6LL May 13 00:42:33.483825 kubelet[1563]: E0513 00:42:33.483749 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:34.290004 env[1308]: time="2025-05-13T00:42:34.289910975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:34.290004 env[1308]: time="2025-05-13T00:42:34.289965030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:34.290004 env[1308]: time="2025-05-13T00:42:34.289975583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:34.290539 env[1308]: time="2025-05-13T00:42:34.290153104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce85612a67094aa142901b47b18230d5b5deb643227c7e5ca15d35e15345e83d pid=2635 runtime=io.containerd.runc.v2 May 13 00:42:34.314794 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:34.337148 env[1308]: time="2025-05-13T00:42:34.336452847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vtmgj,Uid:ccaff0b3-9f46-4b4a-a6a6-57a99d7cc1ec,Namespace:default,Attempt:0,} returns sandbox id \"ce85612a67094aa142901b47b18230d5b5deb643227c7e5ca15d35e15345e83d\"" May 13 00:42:34.337937 env[1308]: time="2025-05-13T00:42:34.337912998Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:42:34.484913 kubelet[1563]: E0513 00:42:34.484864 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:35.486029 kubelet[1563]: E0513 00:42:35.485955 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:36.486567 kubelet[1563]: E0513 00:42:36.486500 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:37.316534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898139990.mount: Deactivated successfully. May 13 00:42:37.486690 kubelet[1563]: E0513 00:42:37.486619 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:38.488054 kubelet[1563]: E0513 00:42:38.487938 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:39.261493 env[1308]: time="2025-05-13T00:42:39.261422478Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.263233 env[1308]: time="2025-05-13T00:42:39.263188431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.264723 env[1308]: time="2025-05-13T00:42:39.264690802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.266413 env[1308]: time="2025-05-13T00:42:39.266368000Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:39.266978 env[1308]: time="2025-05-13T00:42:39.266946449Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 00:42:39.268968 env[1308]: time="2025-05-13T00:42:39.268940378Z" level=info msg="CreateContainer within sandbox \"ce85612a67094aa142901b47b18230d5b5deb643227c7e5ca15d35e15345e83d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 00:42:39.281848 env[1308]: 
time="2025-05-13T00:42:39.281798776Z" level=info msg="CreateContainer within sandbox \"ce85612a67094aa142901b47b18230d5b5deb643227c7e5ca15d35e15345e83d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f80b351435bb1aeac54b2beede3052e580e361898b7e4512855e48a48df71715\"" May 13 00:42:39.282298 env[1308]: time="2025-05-13T00:42:39.282254379Z" level=info msg="StartContainer for \"f80b351435bb1aeac54b2beede3052e580e361898b7e4512855e48a48df71715\"" May 13 00:42:39.317169 env[1308]: time="2025-05-13T00:42:39.317020817Z" level=info msg="StartContainer for \"f80b351435bb1aeac54b2beede3052e580e361898b7e4512855e48a48df71715\" returns successfully" May 13 00:42:39.488219 kubelet[1563]: E0513 00:42:39.488168 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:40.164697 kubelet[1563]: I0513 00:42:40.164634 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-vtmgj" podStartSLOduration=5.234376759 podStartE2EDuration="10.164616454s" podCreationTimestamp="2025-05-13 00:42:30 +0000 UTC" firstStartedPulling="2025-05-13 00:42:34.337681873 +0000 UTC m=+26.451777088" lastFinishedPulling="2025-05-13 00:42:39.267921568 +0000 UTC m=+31.382016783" observedRunningTime="2025-05-13 00:42:40.164467704 +0000 UTC m=+32.278562920" watchObservedRunningTime="2025-05-13 00:42:40.164616454 +0000 UTC m=+32.278711669" May 13 00:42:40.488578 kubelet[1563]: E0513 00:42:40.488369 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:40.621598 kubelet[1563]: I0513 00:42:40.621518 1563 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:42:40.622371 kubelet[1563]: E0513 00:42:40.622346 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:41.160417 kubelet[1563]: E0513 00:42:41.160364 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:41.488722 kubelet[1563]: E0513 00:42:41.488531 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:42.488883 kubelet[1563]: E0513 00:42:42.488768 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:42.885571 kubelet[1563]: I0513 00:42:42.885382 1563 topology_manager.go:215] "Topology Admit Handler" podUID="2ed0e678-1951-456a-9bd9-59f86edacf39" podNamespace="default" podName="nfs-server-provisioner-0" May 13 00:42:42.906222 kubelet[1563]: I0513 00:42:42.906139 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crk64\" (UniqueName: \"kubernetes.io/projected/2ed0e678-1951-456a-9bd9-59f86edacf39-kube-api-access-crk64\") pod \"nfs-server-provisioner-0\" (UID: \"2ed0e678-1951-456a-9bd9-59f86edacf39\") " pod="default/nfs-server-provisioner-0" May 13 00:42:42.906222 kubelet[1563]: I0513 00:42:42.906212 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2ed0e678-1951-456a-9bd9-59f86edacf39-data\") pod \"nfs-server-provisioner-0\" (UID: 
\"2ed0e678-1951-456a-9bd9-59f86edacf39\") " pod="default/nfs-server-provisioner-0" May 13 00:42:43.190010 env[1308]: time="2025-05-13T00:42:43.189948896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2ed0e678-1951-456a-9bd9-59f86edacf39,Namespace:default,Attempt:0,}" May 13 00:42:43.218529 systemd-networkd[1079]: lxc8a4613dd3618: Link UP May 13 00:42:43.223439 kernel: eth0: renamed from tmp2f947 May 13 00:42:43.231344 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:42:43.231466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8a4613dd3618: link becomes ready May 13 00:42:43.231813 systemd-networkd[1079]: lxc8a4613dd3618: Gained carrier May 13 00:42:43.410801 env[1308]: time="2025-05-13T00:42:43.410704981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:43.410801 env[1308]: time="2025-05-13T00:42:43.410766497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:43.410801 env[1308]: time="2025-05-13T00:42:43.410785325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:43.411066 env[1308]: time="2025-05-13T00:42:43.410907997Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f947ffd891d9a2af2e9755b935d07778be0a7569f965064d0e584e4b328016a pid=2762 runtime=io.containerd.runc.v2 May 13 00:42:43.447050 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:43.469267 env[1308]: time="2025-05-13T00:42:43.469216882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2ed0e678-1951-456a-9bd9-59f86edacf39,Namespace:default,Attempt:0,} returns sandbox id \"2f947ffd891d9a2af2e9755b935d07778be0a7569f965064d0e584e4b328016a\"" May 13 00:42:43.470841 env[1308]: time="2025-05-13T00:42:43.470814215Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 00:42:43.489664 kubelet[1563]: E0513 00:42:43.489615 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:44.490018 kubelet[1563]: E0513 00:42:44.489950 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:45.063816 systemd-networkd[1079]: lxc8a4613dd3618: Gained IPv6LL May 13 00:42:45.490849 kubelet[1563]: E0513 00:42:45.490765 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:46.286183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175620976.mount: Deactivated successfully. 
May 13 00:42:46.491993 kubelet[1563]: E0513 00:42:46.491909 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:47.492526 kubelet[1563]: E0513 00:42:47.492448 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:48.468062 kubelet[1563]: E0513 00:42:48.468000 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:48.493500 kubelet[1563]: E0513 00:42:48.493450 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:48.940211 env[1308]: time="2025-05-13T00:42:48.940136478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:48.941983 update_engine[1298]: I0513 00:42:48.941468 1298 update_attempter.cc:509] Updating boot flags... May 13 00:42:48.946120 env[1308]: time="2025-05-13T00:42:48.946080859Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:48.953411 env[1308]: time="2025-05-13T00:42:48.947992099Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:48.953411 env[1308]: time="2025-05-13T00:42:48.950327340Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:48.953411 env[1308]: time="2025-05-13T00:42:48.951097578Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 13 00:42:48.954262 env[1308]: time="2025-05-13T00:42:48.954212506Z" level=info msg="CreateContainer within sandbox \"2f947ffd891d9a2af2e9755b935d07778be0a7569f965064d0e584e4b328016a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 00:42:48.972767 env[1308]: time="2025-05-13T00:42:48.972622169Z" level=info msg="CreateContainer within sandbox \"2f947ffd891d9a2af2e9755b935d07778be0a7569f965064d0e584e4b328016a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6cf48e9baa9db9f718f66fdb5d9e0135c63278009bf0f79be5f021a42338e2fc\"" May 13 00:42:48.974449 env[1308]: time="2025-05-13T00:42:48.973602338Z" level=info msg="StartContainer for \"6cf48e9baa9db9f718f66fdb5d9e0135c63278009bf0f79be5f021a42338e2fc\"" May 13 00:42:49.130343 env[1308]: time="2025-05-13T00:42:49.130281815Z" level=info msg="StartContainer for \"6cf48e9baa9db9f718f66fdb5d9e0135c63278009bf0f79be5f021a42338e2fc\" returns successfully" May 13 00:42:49.494520 kubelet[1563]: E0513 00:42:49.494448 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:50.495657 kubelet[1563]: E0513 00:42:50.495579 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 
13 00:42:51.495846 kubelet[1563]: E0513 00:42:51.495772 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:52.495993 kubelet[1563]: E0513 00:42:52.495932 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:53.496359 kubelet[1563]: E0513 00:42:53.496292 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:54.497019 kubelet[1563]: E0513 00:42:54.496953 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:55.497231 kubelet[1563]: E0513 00:42:55.497175 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:56.497988 kubelet[1563]: E0513 00:42:56.497927 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:57.499091 kubelet[1563]: E0513 00:42:57.499011 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:58.483165 kubelet[1563]: I0513 00:42:58.483086 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.001151069 podStartE2EDuration="16.483062022s" podCreationTimestamp="2025-05-13 00:42:42 +0000 UTC" firstStartedPulling="2025-05-13 00:42:43.470568801 +0000 UTC m=+35.584664016" lastFinishedPulling="2025-05-13 00:42:48.952479754 +0000 UTC m=+41.066574969" observedRunningTime="2025-05-13 00:42:49.198115608 +0000 UTC m=+41.312210853" watchObservedRunningTime="2025-05-13 00:42:58.483062022 +0000 UTC m=+50.597157237" May 13 00:42:58.483487 kubelet[1563]: I0513 00:42:58.483250 1563 topology_manager.go:215] "Topology Admit Handler" podUID="0957be84-52a4-463e-aac1-5aa7cbd7f53b" podNamespace="default" podName="test-pod-1" May 13 00:42:58.500070 kubelet[1563]: E0513 00:42:58.500017 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:42:58.612305 kubelet[1563]: I0513 00:42:58.612201 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0bbd65be-ffac-4073-831c-f18942df8ed2\" (UniqueName: \"kubernetes.io/nfs/0957be84-52a4-463e-aac1-5aa7cbd7f53b-pvc-0bbd65be-ffac-4073-831c-f18942df8ed2\") pod \"test-pod-1\" (UID: \"0957be84-52a4-463e-aac1-5aa7cbd7f53b\") " pod="default/test-pod-1" May 13 00:42:58.612305 kubelet[1563]: I0513 00:42:58.612313 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxbwb\" (UniqueName: \"kubernetes.io/projected/0957be84-52a4-463e-aac1-5aa7cbd7f53b-kube-api-access-bxbwb\") pod \"test-pod-1\" (UID: \"0957be84-52a4-463e-aac1-5aa7cbd7f53b\") " pod="default/test-pod-1" May 13 00:42:58.737441 kernel: FS-Cache: Loaded May 13 00:42:58.780892 kernel: RPC: Registered named UNIX socket transport module. May 13 00:42:58.781069 kernel: RPC: Registered udp transport module. May 13 00:42:58.781091 kernel: RPC: Registered tcp transport module. May 13 00:42:58.781601 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 13 00:42:58.839438 kernel: FS-Cache: Netfs 'nfs' registered for caching May 13 00:42:59.015846 kernel: NFS: Registering the id_resolver key type May 13 00:42:59.016011 kernel: Key type id_resolver registered May 13 00:42:59.016042 kernel: Key type id_legacy registered May 13 00:42:59.040663 nfsidmap[2892]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:42:59.044116 nfsidmap[2895]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:42:59.086659 env[1308]: time="2025-05-13T00:42:59.086598887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0957be84-52a4-463e-aac1-5aa7cbd7f53b,Namespace:default,Attempt:0,}" May 13 00:42:59.117145 systemd-networkd[1079]: lxcc2942a2f0dd8: Link UP May 13 00:42:59.126503 kernel: eth0: renamed from tmp99727 May 13 00:42:59.133803 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:42:59.133870 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc2942a2f0dd8: link becomes ready May 13 00:42:59.133964 systemd-networkd[1079]: lxcc2942a2f0dd8: Gained carrier May 13 00:42:59.265579 env[1308]: time="2025-05-13T00:42:59.265496324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:59.265579 env[1308]: time="2025-05-13T00:42:59.265534228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:59.265847 env[1308]: time="2025-05-13T00:42:59.265544928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:59.266021 env[1308]: time="2025-05-13T00:42:59.265920603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99727e4a6ec07932f75103af9b2d624e91f7ce3f24a0b8df513727b9ece47017 pid=2929 runtime=io.containerd.runc.v2 May 13 00:42:59.284675 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:42:59.305868 env[1308]: time="2025-05-13T00:42:59.305737023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0957be84-52a4-463e-aac1-5aa7cbd7f53b,Namespace:default,Attempt:0,} returns sandbox id \"99727e4a6ec07932f75103af9b2d624e91f7ce3f24a0b8df513727b9ece47017\"" May 13 00:42:59.307772 env[1308]: time="2025-05-13T00:42:59.307738388Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:42:59.500678 kubelet[1563]: E0513 00:42:59.500615 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:00.018646 env[1308]: time="2025-05-13T00:43:00.018584696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:00.081360 env[1308]: time="2025-05-13T00:43:00.081283793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:00.084468 env[1308]: time="2025-05-13T00:43:00.084436740Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:00.086856 env[1308]: time="2025-05-13T00:43:00.086816167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:00.087512 env[1308]: time="2025-05-13T00:43:00.087485974Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 00:43:00.089828 env[1308]: time="2025-05-13T00:43:00.089790465Z" level=info msg="CreateContainer within sandbox \"99727e4a6ec07932f75103af9b2d624e91f7ce3f24a0b8df513727b9ece47017\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 00:43:00.110614 env[1308]: time="2025-05-13T00:43:00.110547018Z" level=info msg="CreateContainer within sandbox \"99727e4a6ec07932f75103af9b2d624e91f7ce3f24a0b8df513727b9ece47017\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"86790660ccef1535a865c2fa165ab0d52c1d1daa78040199f3b93b305a83d392\"" May 13 00:43:00.111197 env[1308]: time="2025-05-13T00:43:00.111159063Z" level=info msg="StartContainer for \"86790660ccef1535a865c2fa165ab0d52c1d1daa78040199f3b93b305a83d392\"" May 13 00:43:00.151271 env[1308]: time="2025-05-13T00:43:00.151218078Z" level=info msg="StartContainer for \"86790660ccef1535a865c2fa165ab0d52c1d1daa78040199f3b93b305a83d392\" returns successfully" May 13 00:43:00.205084 kubelet[1563]: I0513 00:43:00.205010 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.423809527 podStartE2EDuration="17.204991225s" podCreationTimestamp="2025-05-13 00:42:43 +0000 UTC" firstStartedPulling="2025-05-13 00:42:59.307373374 +0000 UTC m=+51.421468589" lastFinishedPulling="2025-05-13 00:43:00.088555072 +0000 UTC m=+52.202650287" observedRunningTime="2025-05-13 00:43:00.204841532 +0000 UTC m=+52.318936757" watchObservedRunningTime="2025-05-13 00:43:00.204991225 +0000 UTC m=+52.319086440" May 13 00:43:00.356627 systemd-networkd[1079]: lxcc2942a2f0dd8: Gained IPv6LL May 13 00:43:00.501286 kubelet[1563]: E0513 00:43:00.501243 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:01.501831 kubelet[1563]: E0513 00:43:01.501749 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:02.502274 kubelet[1563]: E0513 00:43:02.502223 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:03.502880 kubelet[1563]: E0513 00:43:03.502796 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:04.503727 kubelet[1563]: E0513 00:43:04.503645 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:05.361501 env[1308]: time="2025-05-13T00:43:05.361377194Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:43:05.367500 env[1308]: 
time="2025-05-13T00:43:05.367452738Z" level=info msg="StopContainer for \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" with timeout 2 (s)" May 13 00:43:05.367702 env[1308]: time="2025-05-13T00:43:05.367683234Z" level=info msg="Stop container \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" with signal terminated" May 13 00:43:05.373546 systemd-networkd[1079]: lxc_health: Link DOWN May 13 00:43:05.373555 systemd-networkd[1079]: lxc_health: Lost carrier May 13 00:43:05.429185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd-rootfs.mount: Deactivated successfully. May 13 00:43:05.436639 env[1308]: time="2025-05-13T00:43:05.436595981Z" level=info msg="shim disconnected" id=e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd May 13 00:43:05.436776 env[1308]: time="2025-05-13T00:43:05.436642521Z" level=warning msg="cleaning up after shim disconnected" id=e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd namespace=k8s.io May 13 00:43:05.436776 env[1308]: time="2025-05-13T00:43:05.436650808Z" level=info msg="cleaning up dead shim" May 13 00:43:05.443671 env[1308]: time="2025-05-13T00:43:05.443614133Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3059 runtime=io.containerd.runc.v2\n" May 13 00:43:05.446917 env[1308]: time="2025-05-13T00:43:05.446878324Z" level=info msg="StopContainer for \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" returns successfully" May 13 00:43:05.447696 env[1308]: time="2025-05-13T00:43:05.447656813Z" level=info msg="StopPodSandbox for \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\"" May 13 00:43:05.447760 env[1308]: time="2025-05-13T00:43:05.447725728Z" level=info msg="Container to stop \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.447760 env[1308]: time="2025-05-13T00:43:05.447739083Z" level=info msg="Container to stop \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.447760 env[1308]: time="2025-05-13T00:43:05.447748772Z" level=info msg="Container to stop \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.447760 env[1308]: time="2025-05-13T00:43:05.447758150Z" level=info msg="Container to stop \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.447915 env[1308]: time="2025-05-13T00:43:05.447768760Z" level=info msg="Container to stop \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.449929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a-shm.mount: Deactivated successfully. May 13 00:43:05.465186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a-rootfs.mount: Deactivated successfully. 
May 13 00:43:05.470636 env[1308]: time="2025-05-13T00:43:05.470585804Z" level=info msg="shim disconnected" id=dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a May 13 00:43:05.470780 env[1308]: time="2025-05-13T00:43:05.470641543Z" level=warning msg="cleaning up after shim disconnected" id=dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a namespace=k8s.io May 13 00:43:05.470780 env[1308]: time="2025-05-13T00:43:05.470650980Z" level=info msg="cleaning up dead shim" May 13 00:43:05.478066 env[1308]: time="2025-05-13T00:43:05.478013912Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3091 runtime=io.containerd.runc.v2\n" May 13 00:43:05.478343 env[1308]: time="2025-05-13T00:43:05.478314675Z" level=info msg="TearDown network for sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" successfully" May 13 00:43:05.478378 env[1308]: time="2025-05-13T00:43:05.478340675Z" level=info msg="StopPodSandbox for \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" returns successfully" May 13 00:43:05.504689 kubelet[1563]: E0513 00:43:05.504644 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:05.556334 kubelet[1563]: I0513 00:43:05.556263 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-run\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556334 kubelet[1563]: I0513 00:43:05.556318 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cni-path\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556334 kubelet[1563]: I0513 00:43:05.556335 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-etc-cni-netd\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556541 kubelet[1563]: I0513 00:43:05.556354 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-lib-modules\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556541 kubelet[1563]: I0513 00:43:05.556371 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-kernel\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556541 kubelet[1563]: I0513 00:43:05.556407 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cni-path" (OuterVolumeSpecName: "cni-path") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.556541 kubelet[1563]: I0513 00:43:05.556442 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.556541 kubelet[1563]: I0513 00:43:05.556412 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.556684 kubelet[1563]: I0513 00:43:05.556421 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/218c7015-c112-42f9-9155-ab20605eafda-clustermesh-secrets\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556684 kubelet[1563]: I0513 00:43:05.556469 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.556684 kubelet[1563]: I0513 00:43:05.556496 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.556684 kubelet[1563]: I0513 00:43:05.556513 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxh5z\" (UniqueName: \"kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-kube-api-access-gxh5z\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556684 kubelet[1563]: I0513 00:43:05.556541 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-hubble-tls\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556809 kubelet[1563]: I0513 00:43:05.556576 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/218c7015-c112-42f9-9155-ab20605eafda-cilium-config-path\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556809 kubelet[1563]: I0513 00:43:05.556604 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-hostproc\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556809 kubelet[1563]: I0513 00:43:05.556623 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-cgroup\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556809 kubelet[1563]: I0513 00:43:05.556641 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-net\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556809 kubelet[1563]: I0513 00:43:05.556661 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-xtables-lock\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556809 kubelet[1563]: I0513 00:43:05.556681 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-bpf-maps\") pod \"218c7015-c112-42f9-9155-ab20605eafda\" (UID: \"218c7015-c112-42f9-9155-ab20605eafda\") " May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556719 1563 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-etc-cni-netd\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556733 1563 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-run\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556744 1563 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cni-path\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556755 1563 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-lib-modules\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556765 1563 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-kernel\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556791 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.556957 kubelet[1563]: I0513 00:43:05.556939 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.558985 kubelet[1563]: I0513 00:43:05.557379 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-hostproc" (OuterVolumeSpecName: "hostproc") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.559217 kubelet[1563]: I0513 00:43:05.559198 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.559363 kubelet[1563]: I0513 00:43:05.559315 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.559532 kubelet[1563]: I0513 00:43:05.559498 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/218c7015-c112-42f9-9155-ab20605eafda-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:05.559859 kubelet[1563]: I0513 00:43:05.559834 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218c7015-c112-42f9-9155-ab20605eafda-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:05.560893 systemd[1]: var-lib-kubelet-pods-218c7015\x2dc112\x2d42f9\x2d9155\x2dab20605eafda-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:43:05.561077 kubelet[1563]: I0513 00:43:05.560967 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-kube-api-access-gxh5z" (OuterVolumeSpecName: "kube-api-access-gxh5z") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "kube-api-access-gxh5z". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:05.561517 kubelet[1563]: I0513 00:43:05.561493 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "218c7015-c112-42f9-9155-ab20605eafda" (UID: "218c7015-c112-42f9-9155-ab20605eafda"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658221 1563 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-cilium-cgroup\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658276 1563 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-hubble-tls\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658285 1563 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/218c7015-c112-42f9-9155-ab20605eafda-cilium-config-path\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658300 1563 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-hostproc\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658309 1563 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-bpf-maps\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658317 1563 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-host-proc-sys-net\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658301 kubelet[1563]: I0513 00:43:05.658324 1563 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/218c7015-c112-42f9-9155-ab20605eafda-xtables-lock\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658700 kubelet[1563]: I0513 00:43:05.658335 1563 reconciler_common.go:289] 
"Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/218c7015-c112-42f9-9155-ab20605eafda-clustermesh-secrets\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:05.658700 kubelet[1563]: I0513 00:43:05.658348 1563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gxh5z\" (UniqueName: \"kubernetes.io/projected/218c7015-c112-42f9-9155-ab20605eafda-kube-api-access-gxh5z\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:06.211479 kubelet[1563]: I0513 00:43:06.211450 1563 scope.go:117] "RemoveContainer" containerID="e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd" May 13 00:43:06.212869 env[1308]: time="2025-05-13T00:43:06.212831527Z" level=info msg="RemoveContainer for \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\"" May 13 00:43:06.216117 env[1308]: time="2025-05-13T00:43:06.216053969Z" level=info msg="RemoveContainer for \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" returns successfully" May 13 00:43:06.216270 kubelet[1563]: I0513 00:43:06.216244 1563 scope.go:117] "RemoveContainer" containerID="c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda" May 13 00:43:06.217206 env[1308]: time="2025-05-13T00:43:06.217163629Z" level=info msg="RemoveContainer for \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\"" May 13 00:43:06.220059 env[1308]: time="2025-05-13T00:43:06.220028660Z" level=info msg="RemoveContainer for \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\" returns successfully" May 13 00:43:06.220223 kubelet[1563]: I0513 00:43:06.220202 1563 scope.go:117] "RemoveContainer" containerID="505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374" May 13 00:43:06.221146 env[1308]: time="2025-05-13T00:43:06.221124263Z" level=info msg="RemoveContainer for \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\"" May 13 00:43:06.224717 env[1308]: time="2025-05-13T00:43:06.224589445Z" level=info msg="RemoveContainer for \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\" returns successfully" May 13 00:43:06.224908 kubelet[1563]: I0513 00:43:06.224873 1563 scope.go:117] "RemoveContainer" containerID="b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea" May 13 00:43:06.225936 env[1308]: time="2025-05-13T00:43:06.225902359Z" level=info msg="RemoveContainer for \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\"" May 13 00:43:06.228779 env[1308]: time="2025-05-13T00:43:06.228748222Z" level=info msg="RemoveContainer for \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\" returns successfully" May 13 00:43:06.228926 kubelet[1563]: I0513 00:43:06.228894 1563 scope.go:117] "RemoveContainer" containerID="bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c" May 13 00:43:06.229745 env[1308]: time="2025-05-13T00:43:06.229715917Z" level=info msg="RemoveContainer for \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\"" May 13 00:43:06.232227 env[1308]: time="2025-05-13T00:43:06.232201453Z" level=info msg="RemoveContainer for \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\" returns successfully" May 13 00:43:06.232376 kubelet[1563]: I0513 00:43:06.232346 1563 scope.go:117] "RemoveContainer" containerID="e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd" May 13 00:43:06.232635 env[1308]: time="2025-05-13T00:43:06.232512536Z" level=error msg="ContainerStatus for 
\"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\": not found" May 13 00:43:06.232794 kubelet[1563]: E0513 00:43:06.232766 1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\": not found" containerID="e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd" May 13 00:43:06.232878 kubelet[1563]: I0513 00:43:06.232799 1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd"} err="failed to get container status \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e485116111d07cb30b553cdb1389976bc51e174bc106b58e1166386f0bfc37bd\": not found" May 13 00:43:06.232878 kubelet[1563]: I0513 00:43:06.232876 1563 scope.go:117] "RemoveContainer" containerID="c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda" May 13 00:43:06.233051 env[1308]: time="2025-05-13T00:43:06.233008256Z" level=error msg="ContainerStatus for \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\": not found" May 13 00:43:06.233196 kubelet[1563]: E0513 00:43:06.233169 1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\": not found" containerID="c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda" May 13 00:43:06.233247 kubelet[1563]: I0513 00:43:06.233202 1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda"} err="failed to get container status \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1075e017df7ec70fc2e95297c1a4b5ba8811a6efe4cd01a97d20ac1928e9bda\": not found" May 13 00:43:06.233247 kubelet[1563]: I0513 00:43:06.233224 1563 scope.go:117] "RemoveContainer" containerID="505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374" May 13 00:43:06.233493 env[1308]: time="2025-05-13T00:43:06.233380918Z" level=error msg="ContainerStatus for \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\": not found" May 13 00:43:06.233590 kubelet[1563]: E0513 00:43:06.233569 1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\": not found" containerID="505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374" May 13 00:43:06.233631 kubelet[1563]: I0513 00:43:06.233592 1563 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"containerd","ID":"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374"} err="failed to get container status \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\": rpc error: code = NotFound desc = an error occurred when try to find container \"505de3abb9335649236e1d6fb591fb131d0605bb2c80640c37ccb9aeb4183374\": not found" May 13 00:43:06.233661 kubelet[1563]: I0513 00:43:06.233630 1563 scope.go:117] "RemoveContainer" containerID="b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea" May 13 00:43:06.233800 env[1308]: time="2025-05-13T00:43:06.233759361Z" level=error msg="ContainerStatus for \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\": not found" May 13 00:43:06.233965 kubelet[1563]: E0513 00:43:06.233946 1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\": not found" containerID="b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea" May 13 00:43:06.234013 kubelet[1563]: I0513 00:43:06.233965 1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea"} err="failed to get container status \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"b811f8a3e6837ca40a70676e489ccb8810b753dcdcd67ac2544d2dc394cd78ea\": not found" May 13 00:43:06.234013 kubelet[1563]: I0513 00:43:06.233979 1563 scope.go:117] "RemoveContainer" containerID="bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c" May 13 00:43:06.234281 env[1308]: time="2025-05-13T00:43:06.234203050Z" level=error msg="ContainerStatus for \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\": not found" May 13 00:43:06.234379 kubelet[1563]: E0513 00:43:06.234350 1563 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\": not found" containerID="bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c" May 13 00:43:06.234379 kubelet[1563]: I0513 00:43:06.234368 1563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c"} err="failed to get container status \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdfba942be8496841d315a146831a2e1bdce38a63a7a8521f4a5b7ad8431543c\": not found" May 13 00:43:06.346984 systemd[1]: var-lib-kubelet-pods-218c7015\x2dc112\x2d42f9\x2d9155\x2dab20605eafda-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgxh5z.mount: Deactivated successfully. 
May 13 00:43:06.347116 systemd[1]: var-lib-kubelet-pods-218c7015\x2dc112\x2d42f9\x2d9155\x2dab20605eafda-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:43:06.505333 kubelet[1563]: E0513 00:43:06.505181 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:07.096418 kubelet[1563]: I0513 00:43:07.096353 1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="218c7015-c112-42f9-9155-ab20605eafda" path="/var/lib/kubelet/pods/218c7015-c112-42f9-9155-ab20605eafda/volumes" May 13 00:43:07.506283 kubelet[1563]: E0513 00:43:07.506212 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:07.983947 kubelet[1563]: I0513 00:43:07.983866 1563 topology_manager.go:215] "Topology Admit Handler" podUID="a4554c67-9494-4df9-9de9-d656826715f1" podNamespace="kube-system" podName="cilium-operator-599987898-27mhl" May 13 00:43:07.983947 kubelet[1563]: E0513 00:43:07.983928 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="218c7015-c112-42f9-9155-ab20605eafda" containerName="mount-bpf-fs" May 13 00:43:07.983947 kubelet[1563]: E0513 00:43:07.983938 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="218c7015-c112-42f9-9155-ab20605eafda" containerName="clean-cilium-state" May 13 00:43:07.983947 kubelet[1563]: E0513 00:43:07.983944 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="218c7015-c112-42f9-9155-ab20605eafda" containerName="cilium-agent" May 13 00:43:07.983947 kubelet[1563]: E0513 00:43:07.983949 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="218c7015-c112-42f9-9155-ab20605eafda" containerName="mount-cgroup" May 13 00:43:07.983947 kubelet[1563]: E0513 00:43:07.983956 1563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="218c7015-c112-42f9-9155-ab20605eafda" containerName="apply-sysctl-overwrites" May 13 00:43:07.983947 kubelet[1563]: I0513 00:43:07.983973 1563 memory_manager.go:354] "RemoveStaleState removing state" podUID="218c7015-c112-42f9-9155-ab20605eafda" containerName="cilium-agent" May 13 00:43:07.997754 kubelet[1563]: I0513 00:43:07.997671 1563 topology_manager.go:215] "Topology Admit Handler" podUID="38aaa280-afd4-4af3-ae45-0da6c81180e4" podNamespace="kube-system" podName="cilium-465fn" May 13 00:43:08.071969 kubelet[1563]: I0513 00:43:08.071895 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-bpf-maps\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.071969 kubelet[1563]: I0513 00:43:08.071963 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-hostproc\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072213 kubelet[1563]: I0513 00:43:08.071994 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-lib-modules\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 
00:43:08.072213 kubelet[1563]: I0513 00:43:08.072020 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-ipsec-secrets\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072213 kubelet[1563]: I0513 00:43:08.072050 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77476\" (UniqueName: \"kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-kube-api-access-77476\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072213 kubelet[1563]: I0513 00:43:08.072083 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsbl4\" (UniqueName: \"kubernetes.io/projected/a4554c67-9494-4df9-9de9-d656826715f1-kube-api-access-gsbl4\") pod \"cilium-operator-599987898-27mhl\" (UID: \"a4554c67-9494-4df9-9de9-d656826715f1\") " pod="kube-system/cilium-operator-599987898-27mhl" May 13 00:43:08.072213 kubelet[1563]: I0513 00:43:08.072117 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-run\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072323 kubelet[1563]: I0513 00:43:08.072146 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cni-path\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072323 kubelet[1563]: I0513 00:43:08.072187 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-clustermesh-secrets\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072323 kubelet[1563]: I0513 00:43:08.072215 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-net\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072323 kubelet[1563]: I0513 00:43:08.072244 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-cgroup\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072323 kubelet[1563]: I0513 00:43:08.072269 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-etc-cni-netd\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072474 kubelet[1563]: I0513 00:43:08.072322 1563 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-xtables-lock\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072474 kubelet[1563]: I0513 00:43:08.072353 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-config-path\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072474 kubelet[1563]: I0513 00:43:08.072382 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4554c67-9494-4df9-9de9-d656826715f1-cilium-config-path\") pod \"cilium-operator-599987898-27mhl\" (UID: \"a4554c67-9494-4df9-9de9-d656826715f1\") " pod="kube-system/cilium-operator-599987898-27mhl" May 13 00:43:08.072474 kubelet[1563]: I0513 00:43:08.072440 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-kernel\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.072474 kubelet[1563]: I0513 00:43:08.072464 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-hubble-tls\") pod \"cilium-465fn\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " pod="kube-system/cilium-465fn" May 13 00:43:08.170809 kubelet[1563]: E0513 00:43:08.170723 1563 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-77476 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-465fn" podUID="38aaa280-afd4-4af3-ae45-0da6c81180e4" May 13 00:43:08.275426 kubelet[1563]: I0513 00:43:08.274480 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-hostproc\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275426 kubelet[1563]: I0513 00:43:08.274547 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-bpf-maps\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275426 kubelet[1563]: I0513 00:43:08.274568 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-run\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275426 kubelet[1563]: I0513 00:43:08.274585 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cni-path\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275426 kubelet[1563]: I0513 00:43:08.274605 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-cgroup\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275426 kubelet[1563]: I0513 00:43:08.274644 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-hubble-tls\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275714 kubelet[1563]: I0513 00:43:08.274639 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-hostproc" (OuterVolumeSpecName: "hostproc") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.275714 kubelet[1563]: I0513 00:43:08.274665 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77476\" (UniqueName: \"kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-kube-api-access-77476\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275714 kubelet[1563]: I0513 00:43:08.274707 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-clustermesh-secrets\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275714 kubelet[1563]: I0513 00:43:08.274709 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cni-path" (OuterVolumeSpecName: "cni-path") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.275714 kubelet[1563]: I0513 00:43:08.274705 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.275838 kubelet[1563]: I0513 00:43:08.274753 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.275838 kubelet[1563]: I0513 00:43:08.274733 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.275838 kubelet[1563]: I0513 00:43:08.274727 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-xtables-lock\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275838 kubelet[1563]: I0513 00:43:08.274862 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-ipsec-secrets\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275838 kubelet[1563]: I0513 00:43:08.274899 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-lib-modules\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275838 kubelet[1563]: I0513 00:43:08.274924 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-etc-cni-netd\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.274949 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-kernel\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.274970 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-net\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.274996 1563 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-config-path\") pod \"38aaa280-afd4-4af3-ae45-0da6c81180e4\" (UID: \"38aaa280-afd4-4af3-ae45-0da6c81180e4\") " May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.275059 1563 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-hostproc\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.275082 1563 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-bpf-maps\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.275095 1563 
reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-run\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.275983 kubelet[1563]: I0513 00:43:08.275109 1563 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cni-path\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.276145 kubelet[1563]: I0513 00:43:08.275122 1563 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-xtables-lock\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.276414 kubelet[1563]: I0513 00:43:08.276365 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.276889 kubelet[1563]: I0513 00:43:08.276872 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.277710 kubelet[1563]: I0513 00:43:08.277662 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.277896 kubelet[1563]: I0513 00:43:08.277873 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.278126 kubelet[1563]: I0513 00:43:08.278054 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:08.279115 kubelet[1563]: I0513 00:43:08.279083 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:08.279354 kubelet[1563]: I0513 00:43:08.279324 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:08.280158 kubelet[1563]: I0513 00:43:08.280130 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-kube-api-access-77476" (OuterVolumeSpecName: "kube-api-access-77476") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "kube-api-access-77476". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:08.280492 systemd[1]: var-lib-kubelet-pods-38aaa280\x2dafd4\x2d4af3\x2dae45\x2d0da6c81180e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77476.mount: Deactivated successfully. May 13 00:43:08.280864 systemd[1]: var-lib-kubelet-pods-38aaa280\x2dafd4\x2d4af3\x2dae45\x2d0da6c81180e4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:43:08.281839 kubelet[1563]: I0513 00:43:08.281802 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:08.283325 kubelet[1563]: I0513 00:43:08.283288 1563 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "38aaa280-afd4-4af3-ae45-0da6c81180e4" (UID: "38aaa280-afd4-4af3-ae45-0da6c81180e4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:08.286596 kubelet[1563]: E0513 00:43:08.286538 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:08.287110 env[1308]: time="2025-05-13T00:43:08.287055486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-27mhl,Uid:a4554c67-9494-4df9-9de9-d656826715f1,Namespace:kube-system,Attempt:0,}" May 13 00:43:08.304028 env[1308]: time="2025-05-13T00:43:08.303885295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:43:08.304028 env[1308]: time="2025-05-13T00:43:08.303954699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:43:08.304028 env[1308]: time="2025-05-13T00:43:08.303970760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:43:08.304349 env[1308]: time="2025-05-13T00:43:08.304249889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/65bcfa674cd2b4b33eda0666b13b2a6a33df99c17a36b61f229dc024dbef38ce pid=3124 runtime=io.containerd.runc.v2 May 13 00:43:08.363735 env[1308]: time="2025-05-13T00:43:08.363654100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-27mhl,Uid:a4554c67-9494-4df9-9de9-d656826715f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"65bcfa674cd2b4b33eda0666b13b2a6a33df99c17a36b61f229dc024dbef38ce\"" May 13 00:43:08.365025 kubelet[1563]: E0513 00:43:08.364549 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:08.365610 env[1308]: time="2025-05-13T00:43:08.365565977Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375514 1563 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-ipsec-secrets\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375545 1563 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-lib-modules\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375553 1563 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-etc-cni-netd\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375561 1563 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-kernel\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375570 1563 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-host-proc-sys-net\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375578 1563 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-config-path\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375585 1563 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38aaa280-afd4-4af3-ae45-0da6c81180e4-cilium-cgroup\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.375594 kubelet[1563]: I0513 00:43:08.375592 1563 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-77476\" (UniqueName: \"kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-kube-api-access-77476\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.376092 kubelet[1563]: I0513 00:43:08.375599 1563 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/38aaa280-afd4-4af3-ae45-0da6c81180e4-clustermesh-secrets\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.376092 kubelet[1563]: I0513 00:43:08.375608 1563 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38aaa280-afd4-4af3-ae45-0da6c81180e4-hubble-tls\") on node \"10.0.0.58\" DevicePath \"\"" May 13 00:43:08.468786 kubelet[1563]: E0513 00:43:08.468698 1563 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:08.506848 kubelet[1563]: E0513 00:43:08.506772 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:08.986091 env[1308]: time="2025-05-13T00:43:08.986035360Z" level=info msg="StopPodSandbox for \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\"" May 13 00:43:08.986278 env[1308]: time="2025-05-13T00:43:08.986123431Z" level=info msg="TearDown network for sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" successfully" May 13 00:43:08.986278 env[1308]: time="2025-05-13T00:43:08.986157617Z" level=info msg="StopPodSandbox for \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" returns successfully" May 13 00:43:08.986652 env[1308]: time="2025-05-13T00:43:08.986613528Z" level=info msg="RemovePodSandbox for \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\"" May 13 00:43:08.986727 env[1308]: time="2025-05-13T00:43:08.986655158Z" level=info msg="Forcibly stopping sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\"" May 13 00:43:08.986770 env[1308]: time="2025-05-13T00:43:08.986745073Z" level=info msg="TearDown network for sandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" successfully" May 13 00:43:08.990659 env[1308]: time="2025-05-13T00:43:08.990607841Z" level=info msg="RemovePodSandbox \"dee56fa1e23b080ef75d689289b9b83dd86ca4302213a7ffc55ee0f0e2d9028a\" returns successfully" May 13 00:43:09.075859 kubelet[1563]: E0513 00:43:09.075790 1563 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:43:09.180973 systemd[1]: var-lib-kubelet-pods-38aaa280\x2dafd4\x2d4af3\x2dae45\x2d0da6c81180e4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 13 00:43:09.181868 systemd[1]: var-lib-kubelet-pods-38aaa280\x2dafd4\x2d4af3\x2dae45\x2d0da6c81180e4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:43:09.294507 kubelet[1563]: I0513 00:43:09.293721 1563 topology_manager.go:215] "Topology Admit Handler" podUID="327106c0-bf5b-4c08-8feb-893739d20a4e" podNamespace="kube-system" podName="cilium-4k2g9" May 13 00:43:09.383883 kubelet[1563]: I0513 00:43:09.383752 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-host-proc-sys-kernel\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.383883 kubelet[1563]: I0513 00:43:09.383820 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-hostproc\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.383883 kubelet[1563]: I0513 00:43:09.383844 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-cni-path\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.383883 kubelet[1563]: I0513 00:43:09.383864 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-etc-cni-netd\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.383883 kubelet[1563]: I0513 00:43:09.383884 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-xtables-lock\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.383883 kubelet[1563]: I0513 00:43:09.383905 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-bpf-maps\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384341 kubelet[1563]: I0513 00:43:09.383927 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/327106c0-bf5b-4c08-8feb-893739d20a4e-clustermesh-secrets\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384341 kubelet[1563]: I0513 00:43:09.383946 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/327106c0-bf5b-4c08-8feb-893739d20a4e-cilium-ipsec-secrets\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384341 kubelet[1563]: I0513 00:43:09.383969 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-cilium-run\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " 
pod="kube-system/cilium-4k2g9" May 13 00:43:09.384341 kubelet[1563]: I0513 00:43:09.383986 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-cilium-cgroup\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384341 kubelet[1563]: I0513 00:43:09.384003 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-lib-modules\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384341 kubelet[1563]: I0513 00:43:09.384025 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/327106c0-bf5b-4c08-8feb-893739d20a4e-cilium-config-path\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384640 kubelet[1563]: I0513 00:43:09.384045 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/327106c0-bf5b-4c08-8feb-893739d20a4e-host-proc-sys-net\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384640 kubelet[1563]: I0513 00:43:09.384069 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/327106c0-bf5b-4c08-8feb-893739d20a4e-hubble-tls\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.384640 kubelet[1563]: I0513 00:43:09.384088 1563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmz2\" (UniqueName: \"kubernetes.io/projected/327106c0-bf5b-4c08-8feb-893739d20a4e-kube-api-access-sfmz2\") pod \"cilium-4k2g9\" (UID: \"327106c0-bf5b-4c08-8feb-893739d20a4e\") " pod="kube-system/cilium-4k2g9" May 13 00:43:09.507892 kubelet[1563]: E0513 00:43:09.507815 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:09.602644 kubelet[1563]: E0513 00:43:09.602446 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:09.603183 env[1308]: time="2025-05-13T00:43:09.603065526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4k2g9,Uid:327106c0-bf5b-4c08-8feb-893739d20a4e,Namespace:kube-system,Attempt:0,}" May 13 00:43:09.736850 env[1308]: time="2025-05-13T00:43:09.736659725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:43:09.736850 env[1308]: time="2025-05-13T00:43:09.736739300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:43:09.736850 env[1308]: time="2025-05-13T00:43:09.736756553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:43:09.737549 env[1308]: time="2025-05-13T00:43:09.737219176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0 pid=3174 runtime=io.containerd.runc.v2 May 13 00:43:09.782907 env[1308]: time="2025-05-13T00:43:09.782784389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4k2g9,Uid:327106c0-bf5b-4c08-8feb-893739d20a4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\"" May 13 00:43:09.784055 kubelet[1563]: E0513 00:43:09.783998 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:09.786342 env[1308]: time="2025-05-13T00:43:09.786285883Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:43:10.011200 env[1308]: time="2025-05-13T00:43:10.011092327Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5cf9b9e14011be43f681d1047117e8b3457085f3dd6dbb348abc051f0c8c227\"" May 13 00:43:10.012199 env[1308]: time="2025-05-13T00:43:10.011870689Z" level=info msg="StartContainer for \"a5cf9b9e14011be43f681d1047117e8b3457085f3dd6dbb348abc051f0c8c227\"" May 13 00:43:10.399524 env[1308]: time="2025-05-13T00:43:10.399447757Z" level=info msg="StartContainer for \"a5cf9b9e14011be43f681d1047117e8b3457085f3dd6dbb348abc051f0c8c227\" returns successfully" May 13 00:43:10.426689 kubelet[1563]: I0513 00:43:10.426590 1563 setters.go:580] "Node became not ready" node="10.0.0.58" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:43:10Z","lastTransitionTime":"2025-05-13T00:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 00:43:10.434919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5cf9b9e14011be43f681d1047117e8b3457085f3dd6dbb348abc051f0c8c227-rootfs.mount: Deactivated successfully. May 13 00:43:10.478150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863688570.mount: Deactivated successfully. 
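[editor's note] The VerifyControllerAttachedVolume entries admitted above (bpf-maps, hostproc, cni-path, lib-modules, ...) are host-path volumes on the cilium pods. For illustration only, this is how such volumes would be declared with the Kubernetes Go API; the host paths shown are assumed Cilium defaults, not values read from this log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a host-path volume like the ones listed in the
// reconciler entries above.
func hostPathVolume(name, path string, t corev1.HostPathType) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPathVolume("bpf-maps", "/sys/fs/bpf", corev1.HostPathDirectoryOrCreate),
		hostPathVolume("hostproc", "/proc", corev1.HostPathDirectory),
		hostPathVolume("cni-path", "/opt/cni/bin", corev1.HostPathDirectoryOrCreate),
		hostPathVolume("lib-modules", "/lib/modules", corev1.HostPathDirectory),
	}
	for _, v := range volumes {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```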
May 13 00:43:10.509111 kubelet[1563]: E0513 00:43:10.509015 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:10.573813 env[1308]: time="2025-05-13T00:43:10.573733032Z" level=info msg="shim disconnected" id=a5cf9b9e14011be43f681d1047117e8b3457085f3dd6dbb348abc051f0c8c227 May 13 00:43:10.577866 env[1308]: time="2025-05-13T00:43:10.574068660Z" level=warning msg="cleaning up after shim disconnected" id=a5cf9b9e14011be43f681d1047117e8b3457085f3dd6dbb348abc051f0c8c227 namespace=k8s.io May 13 00:43:10.577866 env[1308]: time="2025-05-13T00:43:10.574100751Z" level=info msg="cleaning up dead shim" May 13 00:43:10.603056 env[1308]: time="2025-05-13T00:43:10.602381311Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3256 runtime=io.containerd.runc.v2\n" May 13 00:43:11.116722 kubelet[1563]: I0513 00:43:11.116204 1563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38aaa280-afd4-4af3-ae45-0da6c81180e4" path="/var/lib/kubelet/pods/38aaa280-afd4-4af3-ae45-0da6c81180e4/volumes" May 13 00:43:11.405541 kubelet[1563]: E0513 00:43:11.405150 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:11.417968 env[1308]: time="2025-05-13T00:43:11.416438922Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:43:11.467369 env[1308]: time="2025-05-13T00:43:11.467289769Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"10b0d0602c96e7820c028cbd3d3c6089998674383cceb82aae3370a9edd7abeb\"" May 13 00:43:11.468615 env[1308]: time="2025-05-13T00:43:11.468579125Z" level=info msg="StartContainer for \"10b0d0602c96e7820c028cbd3d3c6089998674383cceb82aae3370a9edd7abeb\"" May 13 00:43:11.519506 kubelet[1563]: E0513 00:43:11.515664 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:11.786739 env[1308]: time="2025-05-13T00:43:11.786561622Z" level=info msg="StartContainer for \"10b0d0602c96e7820c028cbd3d3c6089998674383cceb82aae3370a9edd7abeb\" returns successfully" May 13 00:43:11.821845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10b0d0602c96e7820c028cbd3d3c6089998674383cceb82aae3370a9edd7abeb-rootfs.mount: Deactivated successfully. 
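[editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warning means the node's /etc/resolv.conf lists more nameservers than the kubelet will pass through to pods; it keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and drops the rest. A small stand-alone check along the same lines, standard library only:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const limit = 3 // resolver limit on nameserver entries applied to pods
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > limit {
		fmt.Printf("nameserver limit exceeded: %d listed, only %v would be applied\n",
			len(servers), servers[:limit])
	} else {
		fmt.Println("nameservers within limit:", servers)
	}
}
```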
May 13 00:43:11.867017 env[1308]: time="2025-05-13T00:43:11.866933806Z" level=info msg="shim disconnected" id=10b0d0602c96e7820c028cbd3d3c6089998674383cceb82aae3370a9edd7abeb May 13 00:43:11.867017 env[1308]: time="2025-05-13T00:43:11.867000985Z" level=warning msg="cleaning up after shim disconnected" id=10b0d0602c96e7820c028cbd3d3c6089998674383cceb82aae3370a9edd7abeb namespace=k8s.io May 13 00:43:11.867017 env[1308]: time="2025-05-13T00:43:11.867013380Z" level=info msg="cleaning up dead shim" May 13 00:43:11.882952 env[1308]: time="2025-05-13T00:43:11.882840650Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3322 runtime=io.containerd.runc.v2\n" May 13 00:43:11.940895 env[1308]: time="2025-05-13T00:43:11.940783543Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:11.947136 env[1308]: time="2025-05-13T00:43:11.947069795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:11.951189 env[1308]: time="2025-05-13T00:43:11.951054839Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:43:11.951762 env[1308]: time="2025-05-13T00:43:11.951675757Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 00:43:11.957004 env[1308]: time="2025-05-13T00:43:11.956877698Z" level=info msg="CreateContainer within sandbox \"65bcfa674cd2b4b33eda0666b13b2a6a33df99c17a36b61f229dc024dbef38ce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:43:11.985063 env[1308]: time="2025-05-13T00:43:11.984620125Z" level=info msg="CreateContainer within sandbox \"65bcfa674cd2b4b33eda0666b13b2a6a33df99c17a36b61f229dc024dbef38ce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0b2e2a2d29c36c15a8fa508edc240b5f11b60b564b4acaf903fc2107cbf34f31\"" May 13 00:43:11.987492 env[1308]: time="2025-05-13T00:43:11.986234107Z" level=info msg="StartContainer for \"0b2e2a2d29c36c15a8fa508edc240b5f11b60b564b4acaf903fc2107cbf34f31\"" May 13 00:43:12.097206 env[1308]: time="2025-05-13T00:43:12.095199369Z" level=info msg="StartContainer for \"0b2e2a2d29c36c15a8fa508edc240b5f11b60b564b4acaf903fc2107cbf34f31\" returns successfully" May 13 00:43:12.426784 kubelet[1563]: E0513 00:43:12.426734 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:12.433031 kubelet[1563]: E0513 00:43:12.432579 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:12.444111 env[1308]: time="2025-05-13T00:43:12.441956681Z" level=info msg="CreateContainer within sandbox 
\"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:43:12.444265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396867922.mount: Deactivated successfully. May 13 00:43:12.456834 kubelet[1563]: I0513 00:43:12.456696 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-27mhl" podStartSLOduration=1.867611323 podStartE2EDuration="5.456673388s" podCreationTimestamp="2025-05-13 00:43:07 +0000 UTC" firstStartedPulling="2025-05-13 00:43:08.365236209 +0000 UTC m=+60.479331414" lastFinishedPulling="2025-05-13 00:43:11.954298254 +0000 UTC m=+64.068393479" observedRunningTime="2025-05-13 00:43:12.455648022 +0000 UTC m=+64.569743257" watchObservedRunningTime="2025-05-13 00:43:12.456673388 +0000 UTC m=+64.570768623" May 13 00:43:12.516111 kubelet[1563]: E0513 00:43:12.515993 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:12.521017 env[1308]: time="2025-05-13T00:43:12.520667485Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7fe2ab238485ac239987d0e3316229c40c097e11c73b84dd78d2dd3fd5107ad1\"" May 13 00:43:12.522133 env[1308]: time="2025-05-13T00:43:12.522059056Z" level=info msg="StartContainer for \"7fe2ab238485ac239987d0e3316229c40c097e11c73b84dd78d2dd3fd5107ad1\"" May 13 00:43:12.611900 env[1308]: time="2025-05-13T00:43:12.611783322Z" level=info msg="StartContainer for \"7fe2ab238485ac239987d0e3316229c40c097e11c73b84dd78d2dd3fd5107ad1\" returns successfully" May 13 00:43:12.653558 env[1308]: time="2025-05-13T00:43:12.653458635Z" level=info msg="shim disconnected" id=7fe2ab238485ac239987d0e3316229c40c097e11c73b84dd78d2dd3fd5107ad1 May 13 00:43:12.653558 env[1308]: time="2025-05-13T00:43:12.653533839Z" level=warning msg="cleaning up after shim disconnected" id=7fe2ab238485ac239987d0e3316229c40c097e11c73b84dd78d2dd3fd5107ad1 namespace=k8s.io May 13 00:43:12.653558 env[1308]: time="2025-05-13T00:43:12.653550431Z" level=info msg="cleaning up dead shim" May 13 00:43:12.672933 env[1308]: time="2025-05-13T00:43:12.672702675Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3415 runtime=io.containerd.runc.v2\n" May 13 00:43:13.444174 kubelet[1563]: E0513 00:43:13.440684 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:13.444174 kubelet[1563]: E0513 00:43:13.440713 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:13.443177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fe2ab238485ac239987d0e3316229c40c097e11c73b84dd78d2dd3fd5107ad1-rootfs.mount: Deactivated successfully. 
May 13 00:43:13.444976 env[1308]: time="2025-05-13T00:43:13.442584010Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:43:13.473521 env[1308]: time="2025-05-13T00:43:13.473298590Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b29a720a9f26b9b1416e572a9d24a88b5b7efd78b0bbdc25d4cbf1cbbe81f6e4\"" May 13 00:43:13.475198 env[1308]: time="2025-05-13T00:43:13.475086722Z" level=info msg="StartContainer for \"b29a720a9f26b9b1416e572a9d24a88b5b7efd78b0bbdc25d4cbf1cbbe81f6e4\"" May 13 00:43:13.517108 kubelet[1563]: E0513 00:43:13.517021 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:13.539045 env[1308]: time="2025-05-13T00:43:13.538979938Z" level=info msg="StartContainer for \"b29a720a9f26b9b1416e572a9d24a88b5b7efd78b0bbdc25d4cbf1cbbe81f6e4\" returns successfully" May 13 00:43:13.557562 env[1308]: time="2025-05-13T00:43:13.557503835Z" level=info msg="shim disconnected" id=b29a720a9f26b9b1416e572a9d24a88b5b7efd78b0bbdc25d4cbf1cbbe81f6e4 May 13 00:43:13.557562 env[1308]: time="2025-05-13T00:43:13.557563580Z" level=warning msg="cleaning up after shim disconnected" id=b29a720a9f26b9b1416e572a9d24a88b5b7efd78b0bbdc25d4cbf1cbbe81f6e4 namespace=k8s.io May 13 00:43:13.557819 env[1308]: time="2025-05-13T00:43:13.557579470Z" level=info msg="cleaning up dead shim" May 13 00:43:13.564437 env[1308]: time="2025-05-13T00:43:13.564365807Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3468 runtime=io.containerd.runc.v2\n" May 13 00:43:14.077534 kubelet[1563]: E0513 00:43:14.077458 1563 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:43:14.443169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b29a720a9f26b9b1416e572a9d24a88b5b7efd78b0bbdc25d4cbf1cbbe81f6e4-rootfs.mount: Deactivated successfully. 
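[editor's note] While the kubelet.go:2900 "Container runtime network not ready" errors persist, the node object carries Ready=False with reason KubeletNotReady (see the "Node became not ready" entry for 10.0.0.58 earlier); the condition clears once the cilium-agent installs its CNI configuration. A hedged client-go sketch for reading that condition; the kubeconfig path is an assumption, the node name comes from the log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location on the node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "10.0.0.58", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```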
May 13 00:43:14.446193 kubelet[1563]: E0513 00:43:14.446158 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:14.449011 env[1308]: time="2025-05-13T00:43:14.448932905Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:43:14.470774 env[1308]: time="2025-05-13T00:43:14.470712071Z" level=info msg="CreateContainer within sandbox \"3d0d0749232e520f692455629bbeb59f9e108e41ff5b42d2856614b82e5660f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9483fb6c3c4073da1a78a6c84adab43ddce0115c5af3218619ab4104f69b9f92\"" May 13 00:43:14.471458 env[1308]: time="2025-05-13T00:43:14.471382542Z" level=info msg="StartContainer for \"9483fb6c3c4073da1a78a6c84adab43ddce0115c5af3218619ab4104f69b9f92\"" May 13 00:43:14.517313 kubelet[1563]: E0513 00:43:14.517269 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:14.525253 env[1308]: time="2025-05-13T00:43:14.525182212Z" level=info msg="StartContainer for \"9483fb6c3c4073da1a78a6c84adab43ddce0115c5af3218619ab4104f69b9f92\" returns successfully" May 13 00:43:14.954450 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 13 00:43:15.443264 systemd[1]: run-containerd-runc-k8s.io-9483fb6c3c4073da1a78a6c84adab43ddce0115c5af3218619ab4104f69b9f92-runc.wjI6xN.mount: Deactivated successfully. May 13 00:43:15.453810 kubelet[1563]: E0513 00:43:15.453767 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:15.472772 kubelet[1563]: I0513 00:43:15.472666 1563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4k2g9" podStartSLOduration=6.47263892 podStartE2EDuration="6.47263892s" podCreationTimestamp="2025-05-13 00:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:43:15.472021553 +0000 UTC m=+67.586116788" watchObservedRunningTime="2025-05-13 00:43:15.47263892 +0000 UTC m=+67.586734135" May 13 00:43:15.518634 kubelet[1563]: E0513 00:43:15.518499 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:16.455632 kubelet[1563]: E0513 00:43:16.455574 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:16.519875 kubelet[1563]: E0513 00:43:16.519785 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:17.459055 kubelet[1563]: E0513 00:43:17.458899 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:17.520020 kubelet[1563]: E0513 00:43:17.519962 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:18.399128 systemd-networkd[1079]: lxc_health: Link UP May 13 00:43:18.410206 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:43:18.409200 systemd-networkd[1079]: lxc_health: Gained carrier May 13 00:43:18.521120 kubelet[1563]: E0513 00:43:18.521028 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:18.874611 systemd[1]: run-containerd-runc-k8s.io-9483fb6c3c4073da1a78a6c84adab43ddce0115c5af3218619ab4104f69b9f92-runc.UdN0PB.mount: Deactivated successfully. May 13 00:43:19.521343 kubelet[1563]: E0513 00:43:19.521279 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:19.605123 kubelet[1563]: E0513 00:43:19.605081 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:20.132669 systemd-networkd[1079]: lxc_health: Gained IPv6LL May 13 00:43:20.463768 kubelet[1563]: E0513 00:43:20.463735 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:20.522467 kubelet[1563]: E0513 00:43:20.522417 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:21.523309 kubelet[1563]: E0513 00:43:21.523244 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:22.523975 kubelet[1563]: E0513 00:43:22.523900 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:23.524249 kubelet[1563]: E0513 00:43:23.524184 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:24.524809 kubelet[1563]: E0513 00:43:24.524752 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:25.145267 systemd[1]: run-containerd-runc-k8s.io-9483fb6c3c4073da1a78a6c84adab43ddce0115c5af3218619ab4104f69b9f92-runc.sdMx6h.mount: Deactivated successfully. May 13 00:43:25.525455 kubelet[1563]: E0513 00:43:25.525289 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:43:26.525789 kubelet[1563]: E0513 00:43:26.525745 1563 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
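[editor's note] In the cilium-4k2g9 startup-latency entry above, firstStartedPulling and lastFinishedPulling print as "0001-01-01 00:00:00 +0000 UTC". That is Go's zero time.Time, i.e. no image pull happened because the cilium image was already present on the node, which is why podStartSLOduration equals podStartE2EDuration (6.47263892s) for that pod. A two-line illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var firstStartedPulling time.Time             // zero value, never set
	fmt.Println(firstStartedPulling)              // 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(firstStartedPulling.IsZero())     // true
}
```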