May 17 00:31:02.586921 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025 May 17 00:31:02.586952 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:31:02.586963 kernel: BIOS-provided physical RAM map: May 17 00:31:02.586971 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 17 00:31:02.586979 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 17 00:31:02.586986 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 17 00:31:02.586996 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 17 00:31:02.587004 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 17 00:31:02.587013 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 17 00:31:02.587021 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 17 00:31:02.587029 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 17 00:31:02.587036 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 17 00:31:02.587044 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 17 00:31:02.587052 kernel: NX (Execute Disable) protection: active May 17 00:31:02.587063 kernel: SMBIOS 2.8 present. May 17 00:31:02.587072 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 17 00:31:02.587080 kernel: Hypervisor detected: KVM May 17 00:31:02.587088 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 17 00:31:02.587096 kernel: kvm-clock: cpu 0, msr 9c19a001, primary cpu clock May 17 00:31:02.587104 kernel: kvm-clock: using sched offset of 3872198278 cycles May 17 00:31:02.587113 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 17 00:31:02.587122 kernel: tsc: Detected 2794.748 MHz processor May 17 00:31:02.587131 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 17 00:31:02.587143 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 17 00:31:02.587151 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 17 00:31:02.587160 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 17 00:31:02.587168 kernel: Using GB pages for direct mapping May 17 00:31:02.587281 kernel: ACPI: Early table checksum verification disabled May 17 00:31:02.587291 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 17 00:31:02.587300 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587309 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587318 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587328 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 17 00:31:02.587336 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587345 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587353 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587361 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 17 00:31:02.587370 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 17 00:31:02.587379 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 17 00:31:02.587388 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 17 00:31:02.587403 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 17 00:31:02.587413 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 17 00:31:02.587432 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 17 00:31:02.587441 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 17 00:31:02.587450 kernel: No NUMA configuration found May 17 00:31:02.587460 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 17 00:31:02.587472 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 17 00:31:02.587481 kernel: Zone ranges: May 17 00:31:02.587491 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 17 00:31:02.587500 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 17 00:31:02.587509 kernel: Normal empty May 17 00:31:02.587519 kernel: Movable zone start for each node May 17 00:31:02.587528 kernel: Early memory node ranges May 17 00:31:02.587538 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 17 00:31:02.587547 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 17 00:31:02.587556 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 17 00:31:02.587568 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 17 00:31:02.587577 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 17 00:31:02.587587 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 17 00:31:02.587596 kernel: ACPI: PM-Timer IO Port: 0x608 May 17 00:31:02.587606 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 17 00:31:02.587616 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 17 00:31:02.587626 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 17 00:31:02.587636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 17 00:31:02.587646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 17 00:31:02.587658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 17 00:31:02.587668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 17 00:31:02.587678 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 17 00:31:02.587688 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 17 00:31:02.587698 kernel: TSC deadline timer available May 17 00:31:02.587707 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 17 00:31:02.587717 kernel: kvm-guest: KVM setup pv remote TLB flush May 17 00:31:02.587726 kernel: kvm-guest: setup PV sched yield May 17 00:31:02.587736 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 17 00:31:02.587748 kernel: Booting paravirtualized kernel on KVM May 17 00:31:02.587757 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 17 00:31:02.587767 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 17 00:31:02.587776 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 May 17 00:31:02.587786 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 17 00:31:02.587795 kernel: pcpu-alloc: [0] 0 1 2 3 May 17 00:31:02.587804 kernel: kvm-guest: setup async PF for cpu 0 May 17 00:31:02.587813 kernel: kvm-guest: stealtime: cpu 0, msr 9cc1c0c0 May 17 00:31:02.587822 kernel: kvm-guest: PV spinlocks enabled May 17 00:31:02.587834 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 17 00:31:02.587843 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 May 17 00:31:02.587852 kernel: Policy zone: DMA32 May 17 00:31:02.587862 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:31:02.587873 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 17 00:31:02.587882 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 17 00:31:02.587892 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 17 00:31:02.587902 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 17 00:31:02.587915 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved) May 17 00:31:02.587924 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 17 00:31:02.587933 kernel: ftrace: allocating 34585 entries in 136 pages May 17 00:31:02.587943 kernel: ftrace: allocated 136 pages with 2 groups May 17 00:31:02.587952 kernel: rcu: Hierarchical RCU implementation. May 17 00:31:02.587962 kernel: rcu: RCU event tracing is enabled. May 17 00:31:02.587972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 17 00:31:02.587982 kernel: Rude variant of Tasks RCU enabled. May 17 00:31:02.587992 kernel: Tracing variant of Tasks RCU enabled. May 17 00:31:02.588005 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 17 00:31:02.588014 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 17 00:31:02.588025 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 17 00:31:02.588034 kernel: random: crng init done May 17 00:31:02.588044 kernel: Console: colour VGA+ 80x25 May 17 00:31:02.588054 kernel: printk: console [ttyS0] enabled May 17 00:31:02.588065 kernel: ACPI: Core revision 20210730 May 17 00:31:02.588077 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 17 00:31:02.588087 kernel: APIC: Switch to symmetric I/O mode setup May 17 00:31:02.588099 kernel: x2apic enabled May 17 00:31:02.588110 kernel: Switched APIC routing to physical x2apic. May 17 00:31:02.588119 kernel: kvm-guest: setup PV IPIs May 17 00:31:02.588129 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 17 00:31:02.588139 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 17 00:31:02.588149 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 17 00:31:02.588158 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 17 00:31:02.588168 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 17 00:31:02.588193 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 17 00:31:02.588211 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 17 00:31:02.588220 kernel: Spectre V2 : Mitigation: Retpolines May 17 00:31:02.588230 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 17 00:31:02.588241 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 17 00:31:02.588251 kernel: RETBleed: Mitigation: untrained return thunk May 17 00:31:02.588261 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 17 00:31:02.588387 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 17 00:31:02.588397 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 17 00:31:02.588407 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 17 00:31:02.588429 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 17 00:31:02.588439 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 17 00:31:02.588450 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 17 00:31:02.588460 kernel: Freeing SMP alternatives memory: 32K May 17 00:31:02.588469 kernel: pid_max: default: 32768 minimum: 301 May 17 00:31:02.588479 kernel: LSM: Security Framework initializing May 17 00:31:02.588489 kernel: SELinux: Initializing. May 17 00:31:02.588501 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:31:02.588512 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 17 00:31:02.588522 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 17 00:31:02.588532 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 17 00:31:02.588541 kernel: ... version: 0 May 17 00:31:02.588551 kernel: ... bit width: 48 May 17 00:31:02.588561 kernel: ... generic registers: 6 May 17 00:31:02.588571 kernel: ... value mask: 0000ffffffffffff May 17 00:31:02.588580 kernel: ... max period: 00007fffffffffff May 17 00:31:02.588592 kernel: ... fixed-purpose events: 0 May 17 00:31:02.588601 kernel: ... event mask: 000000000000003f May 17 00:31:02.588611 kernel: signal: max sigframe size: 1776 May 17 00:31:02.588621 kernel: rcu: Hierarchical SRCU implementation. May 17 00:31:02.588631 kernel: smp: Bringing up secondary CPUs ... May 17 00:31:02.588641 kernel: x86: Booting SMP configuration: May 17 00:31:02.588651 kernel: .... 
node #0, CPUs: #1 May 17 00:31:02.588662 kernel: kvm-clock: cpu 1, msr 9c19a041, secondary cpu clock May 17 00:31:02.588672 kernel: kvm-guest: setup async PF for cpu 1 May 17 00:31:02.588684 kernel: kvm-guest: stealtime: cpu 1, msr 9cc9c0c0 May 17 00:31:02.588694 kernel: #2 May 17 00:31:02.588704 kernel: kvm-clock: cpu 2, msr 9c19a081, secondary cpu clock May 17 00:31:02.588714 kernel: kvm-guest: setup async PF for cpu 2 May 17 00:31:02.588724 kernel: kvm-guest: stealtime: cpu 2, msr 9cd1c0c0 May 17 00:31:02.588733 kernel: #3 May 17 00:31:02.588743 kernel: kvm-clock: cpu 3, msr 9c19a0c1, secondary cpu clock May 17 00:31:02.588753 kernel: kvm-guest: setup async PF for cpu 3 May 17 00:31:02.588763 kernel: kvm-guest: stealtime: cpu 3, msr 9cd9c0c0 May 17 00:31:02.588772 kernel: smp: Brought up 1 node, 4 CPUs May 17 00:31:02.588784 kernel: smpboot: Max logical packages: 1 May 17 00:31:02.588794 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 17 00:31:02.588804 kernel: devtmpfs: initialized May 17 00:31:02.588813 kernel: x86/mm: Memory block size: 128MB May 17 00:31:02.588823 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 00:31:02.588833 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 17 00:31:02.588842 kernel: pinctrl core: initialized pinctrl subsystem May 17 00:31:02.588852 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 00:31:02.588862 kernel: audit: initializing netlink subsys (disabled) May 17 00:31:02.588873 kernel: audit: type=2000 audit(1747441861.779:1): state=initialized audit_enabled=0 res=1 May 17 00:31:02.588883 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 00:31:02.588893 kernel: thermal_sys: Registered thermal governor 'user_space' May 17 00:31:02.588901 kernel: cpuidle: using governor menu May 17 00:31:02.588911 kernel: ACPI: bus type PCI registered May 17 00:31:02.588920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 00:31:02.588930 kernel: dca service started, version 1.12.1 May 17 00:31:02.588941 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 17 00:31:02.588950 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 17 00:31:02.588961 kernel: PCI: Using configuration type 1 for base access May 17 00:31:02.588970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 17 00:31:02.588980 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 00:31:02.588990 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 00:31:02.589000 kernel: ACPI: Added _OSI(Module Device) May 17 00:31:02.589010 kernel: ACPI: Added _OSI(Processor Device) May 17 00:31:02.589020 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 00:31:02.589030 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 17 00:31:02.589040 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 00:31:02.589053 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 00:31:02.589063 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 00:31:02.589073 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 17 00:31:02.589083 kernel: ACPI: Interpreter enabled May 17 00:31:02.589093 kernel: ACPI: PM: (supports S0 S3 S5) May 17 00:31:02.589104 kernel: ACPI: Using IOAPIC for interrupt routing May 17 00:31:02.589114 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 17 00:31:02.589125 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 17 00:31:02.589134 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 17 00:31:02.589341 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 00:31:02.589472 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 17 00:31:02.589586 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 17 00:31:02.589603 kernel: PCI host bridge to bus 0000:00 May 17 00:31:02.589726 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 17 00:31:02.589830 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 17 00:31:02.589935 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 17 00:31:02.590037 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 17 00:31:02.590151 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 17 00:31:02.590262 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 17 00:31:02.590353 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 00:31:02.593726 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 17 00:31:02.593854 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 17 00:31:02.593969 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 17 00:31:02.594091 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 17 00:31:02.594241 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 17 00:31:02.594353 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 17 00:31:02.594486 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 17 00:31:02.594593 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 17 00:31:02.594701 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 17 00:31:02.594808 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 17 00:31:02.594921 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 17 00:31:02.595040 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 17 00:31:02.595189 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 17 00:31:02.595297 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 17 00:31:02.595413 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 17 00:31:02.595537 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 17 00:31:02.595640 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 17 00:31:02.595750 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 17 00:31:02.595858 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 17 00:31:02.595970 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 17 00:31:02.596084 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 17 00:31:02.596257 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 17 00:31:02.596374 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 17 00:31:02.596490 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 17 00:31:02.596595 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 17 00:31:02.596690 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 17 00:31:02.596705 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 17 00:31:02.596716 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 17 00:31:02.596726 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 17 00:31:02.596735 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 17 00:31:02.596747 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 17 00:31:02.596756 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 17 00:31:02.596767 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 17 00:31:02.596777 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 17 00:31:02.596787 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 17 00:31:02.596797 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 17 00:31:02.596807 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 17 00:31:02.596817 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 17 00:31:02.596828 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 17 00:31:02.596840 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 17 00:31:02.596851 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 17 00:31:02.596861 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 17 00:31:02.596872 kernel: iommu: Default domain type: Translated May 17 00:31:02.596882 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 17 00:31:02.596997 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 17 00:31:02.597110 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 17 00:31:02.597234 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 17 00:31:02.597253 kernel: vgaarb: loaded May 17 00:31:02.597264 kernel: pps_core: LinuxPPS API ver. 1 registered May 17 00:31:02.597274 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 17 00:31:02.597285 kernel: PTP clock support registered May 17 00:31:02.597295 kernel: PCI: Using ACPI for IRQ routing May 17 00:31:02.597305 kernel: PCI: pci_cache_line_size set to 64 bytes May 17 00:31:02.597315 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 17 00:31:02.597325 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 17 00:31:02.597335 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 17 00:31:02.597346 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 17 00:31:02.597359 kernel: clocksource: Switched to clocksource kvm-clock May 17 00:31:02.597369 kernel: VFS: Disk quotas dquot_6.6.0 May 17 00:31:02.597380 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 17 00:31:02.597391 kernel: pnp: PnP ACPI init May 17 00:31:02.597522 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 17 00:31:02.597539 kernel: pnp: PnP ACPI: found 6 devices May 17 00:31:02.597550 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 17 00:31:02.597560 kernel: NET: Registered PF_INET protocol family May 17 00:31:02.597573 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 17 00:31:02.597584 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 17 00:31:02.597594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 17 00:31:02.597605 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 17 00:31:02.597615 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 17 00:31:02.597626 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 17 00:31:02.597636 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:31:02.597647 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 17 00:31:02.597657 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 17 00:31:02.597670 kernel: NET: Registered PF_XDP protocol family May 17 00:31:02.597777 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 17 00:31:02.597879 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 17 00:31:02.597980 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 17 00:31:02.598086 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 17 00:31:02.598213 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 17 00:31:02.598303 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 17 00:31:02.598318 kernel: PCI: CLS 0 bytes, default 64 May 17 00:31:02.598332 kernel: Initialise system trusted keyrings May 17 00:31:02.598342 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 17 00:31:02.598352 kernel: Key type asymmetric registered May 17 00:31:02.598362 kernel: Asymmetric key parser 'x509' registered May 17 00:31:02.598372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 17 00:31:02.598381 kernel: io scheduler mq-deadline registered May 17 00:31:02.598390 kernel: io scheduler kyber registered May 17 00:31:02.598399 kernel: io scheduler bfq registered May 17 00:31:02.598409 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:31:02.598430 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 17 00:31:02.598440 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 
00:31:02.598450 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 17 00:31:02.598459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:31:02.598469 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:31:02.598479 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:31:02.598489 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:31:02.598499 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 17 00:31:02.598510 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:31:02.598622 kernel: rtc_cmos 00:04: RTC can wake from S4 May 17 00:31:02.598720 kernel: rtc_cmos 00:04: registered as rtc0 May 17 00:31:02.598821 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:31:01 UTC (1747441861) May 17 00:31:02.598924 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 17 00:31:02.598940 kernel: NET: Registered PF_INET6 protocol family May 17 00:31:02.598950 kernel: Segment Routing with IPv6 May 17 00:31:02.598960 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:31:02.598970 kernel: NET: Registered PF_PACKET protocol family May 17 00:31:02.598983 kernel: Key type dns_resolver registered May 17 00:31:02.598993 kernel: IPI shorthand broadcast: enabled May 17 00:31:02.599003 kernel: sched_clock: Marking stable (805046009, 139765082)->(1018094544, -73283453) May 17 00:31:02.599013 kernel: registered taskstats version 1 May 17 00:31:02.599023 kernel: Loading compiled-in X.509 certificates May 17 00:31:02.599033 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c' May 17 00:31:02.599043 kernel: Key type .fscrypt registered May 17 00:31:02.599053 kernel: Key type fscrypt-provisioning registered May 17 00:31:02.599063 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:31:02.599078 kernel: ima: Allocated hash algorithm: sha1 May 17 00:31:02.599090 kernel: ima: No architecture policies found May 17 00:31:02.599101 kernel: clk: Disabling unused clocks May 17 00:31:02.599111 kernel: Freeing unused kernel image (initmem) memory: 47472K May 17 00:31:02.599121 kernel: Write protecting the kernel read-only data: 28672k May 17 00:31:02.599131 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 17 00:31:02.599141 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 17 00:31:02.599151 kernel: Run /init as init process May 17 00:31:02.599162 kernel: with arguments: May 17 00:31:02.599191 kernel: /init May 17 00:31:02.599201 kernel: with environment: May 17 00:31:02.599211 kernel: HOME=/ May 17 00:31:02.599222 kernel: TERM=linux May 17 00:31:02.599232 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:31:02.599246 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:31:02.599261 systemd[1]: Detected virtualization kvm. May 17 00:31:02.599273 systemd[1]: Detected architecture x86-64. May 17 00:31:02.599287 systemd[1]: Running in initrd. May 17 00:31:02.599297 systemd[1]: No hostname configured, using default hostname. May 17 00:31:02.599307 systemd[1]: Hostname set to . 
May 17 00:31:02.599318 systemd[1]: Initializing machine ID from VM UUID. May 17 00:31:02.599329 systemd[1]: Queued start job for default target initrd.target. May 17 00:31:02.599339 systemd[1]: Started systemd-ask-password-console.path. May 17 00:31:02.599350 systemd[1]: Reached target cryptsetup.target. May 17 00:31:02.599359 systemd[1]: Reached target paths.target. May 17 00:31:02.599373 systemd[1]: Reached target slices.target. May 17 00:31:02.599392 systemd[1]: Reached target swap.target. May 17 00:31:02.599405 systemd[1]: Reached target timers.target. May 17 00:31:02.599425 systemd[1]: Listening on iscsid.socket. May 17 00:31:02.599437 systemd[1]: Listening on iscsiuio.socket. May 17 00:31:02.599450 systemd[1]: Listening on systemd-journald-audit.socket. May 17 00:31:02.599461 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 00:31:02.599473 systemd[1]: Listening on systemd-journald.socket. May 17 00:31:02.599484 systemd[1]: Listening on systemd-networkd.socket. May 17 00:31:02.599496 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:31:02.599508 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:31:02.599520 systemd[1]: Reached target sockets.target. May 17 00:31:02.599532 systemd[1]: Starting kmod-static-nodes.service... May 17 00:31:02.599544 systemd[1]: Finished network-cleanup.service. May 17 00:31:02.599558 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:31:02.599570 systemd[1]: Starting systemd-journald.service... May 17 00:31:02.599582 systemd[1]: Starting systemd-modules-load.service... May 17 00:31:02.599593 systemd[1]: Starting systemd-resolved.service... May 17 00:31:02.599604 systemd[1]: Starting systemd-vconsole-setup.service... May 17 00:31:02.599616 systemd[1]: Finished kmod-static-nodes.service. May 17 00:31:02.599627 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:31:02.599639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:31:02.599651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:31:02.599666 systemd[1]: Finished systemd-vconsole-setup.service. May 17 00:31:02.599677 systemd[1]: Starting dracut-cmdline-ask.service... May 17 00:31:02.599689 systemd[1]: Finished dracut-cmdline-ask.service. May 17 00:31:02.599700 systemd[1]: Starting dracut-cmdline.service... May 17 00:31:02.599716 systemd-journald[197]: Journal started May 17 00:31:02.599784 systemd-journald[197]: Runtime Journal (/run/log/journal/581fc9d72688477caa762b28c2d282c3) is 6.0M, max 48.5M, 42.5M free. May 17 00:31:02.222332 systemd-modules-load[198]: Inserted module 'overlay' May 17 00:31:02.602462 systemd[1]: Started systemd-journald.service. May 17 00:31:02.602906 dracut-cmdline[215]: dracut-dracut-053 May 17 00:31:02.602906 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 17 00:31:02.602906 dracut-cmdline[215]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0 May 17 00:31:02.618305 kernel: audit: type=1130 audit(1747441862.603:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:02.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.717035 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 17 00:31:02.771032 kernel: Bridge firewalling registered May 17 00:31:02.770345 systemd-modules-load[198]: Inserted module 'br_netfilter' May 17 00:31:02.779216 kernel: SCSI subsystem initialized May 17 00:31:02.795600 systemd-resolved[199]: Positive Trust Anchors: May 17 00:31:02.795638 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:31:02.795672 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:31:02.801340 systemd-resolved[199]: Defaulting to hostname 'linux'. May 17 00:31:02.802184 systemd[1]: Started systemd-resolved.service. May 17 00:31:02.821587 systemd[1]: Reached target nss-lookup.target. May 17 00:31:02.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.836216 kernel: audit: type=1130 audit(1747441862.821:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.852479 kernel: Loading iSCSI transport class v2.0-870. May 17 00:31:02.858263 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 17 00:31:02.858330 kernel: device-mapper: uevent: version 1.0.3 May 17 00:31:02.858344 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:31:02.864479 systemd-modules-load[198]: Inserted module 'dm_multipath' May 17 00:31:02.865854 systemd[1]: Finished systemd-modules-load.service. May 17 00:31:02.876470 systemd[1]: Starting systemd-sysctl.service... May 17 00:31:02.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.884211 kernel: audit: type=1130 audit(1747441862.875:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.892968 systemd[1]: Finished systemd-sysctl.service. May 17 00:31:02.900518 kernel: audit: type=1130 audit(1747441862.893:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:02.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.903301 kernel: iscsi: registered transport (tcp) May 17 00:31:02.939626 kernel: iscsi: registered transport (qla4xxx) May 17 00:31:02.939716 kernel: QLogic iSCSI HBA Driver May 17 00:31:02.978517 systemd[1]: Finished dracut-cmdline.service. May 17 00:31:02.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:02.986530 systemd[1]: Starting dracut-pre-udev.service... May 17 00:31:03.031521 kernel: audit: type=1130 audit(1747441862.982:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:03.164223 kernel: raid6: avx2x4 gen() 20849 MB/s May 17 00:31:03.181221 kernel: raid6: avx2x4 xor() 4546 MB/s May 17 00:31:03.198213 kernel: raid6: avx2x2 gen() 21780 MB/s May 17 00:31:03.215220 kernel: raid6: avx2x2 xor() 13720 MB/s May 17 00:31:03.232212 kernel: raid6: avx2x1 gen() 17178 MB/s May 17 00:31:03.249207 kernel: raid6: avx2x1 xor() 10699 MB/s May 17 00:31:03.266213 kernel: raid6: sse2x4 gen() 10535 MB/s May 17 00:31:03.283218 kernel: raid6: sse2x4 xor() 4757 MB/s May 17 00:31:03.300209 kernel: raid6: sse2x2 gen() 11166 MB/s May 17 00:31:03.317247 kernel: raid6: sse2x2 xor() 6972 MB/s May 17 00:31:03.346429 kernel: raid6: sse2x1 gen() 8299 MB/s May 17 00:31:03.386232 kernel: raid6: sse2x1 xor() 5299 MB/s May 17 00:31:03.386316 kernel: raid6: using algorithm avx2x2 gen() 21780 MB/s May 17 00:31:03.386328 kernel: raid6: .... xor() 13720 MB/s, rmw enabled May 17 00:31:03.386339 kernel: raid6: using avx2x2 recovery algorithm May 17 00:31:03.404403 kernel: xor: automatically using best checksumming function avx May 17 00:31:03.587063 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:31:03.604739 systemd[1]: Finished dracut-pre-udev.service. May 17 00:31:03.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:03.623317 kernel: audit: type=1130 audit(1747441863.608:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:03.623000 audit: BPF prog-id=7 op=LOAD May 17 00:31:03.626255 kernel: audit: type=1334 audit(1747441863.623:8): prog-id=7 op=LOAD May 17 00:31:03.626000 audit: BPF prog-id=8 op=LOAD May 17 00:31:03.629225 kernel: audit: type=1334 audit(1747441863.626:9): prog-id=8 op=LOAD May 17 00:31:03.641112 systemd[1]: Starting systemd-udevd.service... May 17 00:31:03.681921 systemd-udevd[401]: Using default interface naming scheme 'v252'. May 17 00:31:03.687222 systemd[1]: Started systemd-udevd.service. May 17 00:31:03.692879 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:31:03.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:03.714647 kernel: audit: type=1130 audit(1747441863.690:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:03.724070 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 17 00:31:03.786775 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:31:03.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:03.811008 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:31:03.878541 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:31:03.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:04.037937 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 17 00:31:04.058485 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:31:04.058503 kernel: libata version 3.00 loaded. May 17 00:31:04.058515 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 00:31:04.058526 kernel: GPT:9289727 != 19775487 May 17 00:31:04.058536 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 00:31:04.058546 kernel: GPT:9289727 != 19775487 May 17 00:31:04.058561 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 00:31:04.058571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:31:04.130203 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (444) May 17 00:31:04.158442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:31:04.176744 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:31:04.178573 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:31:04.180117 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:31:04.193876 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
May 17 00:31:04.198902 kernel: ahci 0000:00:1f.2: version 3.0 May 17 00:31:04.237505 kernel: AES CTR mode by8 optimization enabled May 17 00:31:04.237528 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 17 00:31:04.237548 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 17 00:31:04.237684 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 17 00:31:04.237796 kernel: scsi host0: ahci May 17 00:31:04.237937 kernel: scsi host1: ahci May 17 00:31:04.238832 kernel: scsi host2: ahci May 17 00:31:04.238973 kernel: scsi host3: ahci May 17 00:31:04.239111 kernel: scsi host4: ahci May 17 00:31:04.241506 kernel: scsi host5: ahci May 17 00:31:04.241637 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 May 17 00:31:04.241651 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 May 17 00:31:04.241663 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 May 17 00:31:04.241675 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 May 17 00:31:04.241687 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 May 17 00:31:04.241698 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 May 17 00:31:04.241715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:31:04.210910 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:31:04.213146 systemd[1]: Starting disk-uuid.service... May 17 00:31:04.246731 disk-uuid[494]: Primary Header is updated. May 17 00:31:04.246731 disk-uuid[494]: Secondary Entries is updated. May 17 00:31:04.246731 disk-uuid[494]: Secondary Header is updated. May 17 00:31:04.252421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:31:04.548020 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 17 00:31:04.548102 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 17 00:31:04.548114 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 17 00:31:04.548135 kernel: ata3.00: applying bridge limits May 17 00:31:04.553690 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 17 00:31:04.559583 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 17 00:31:04.560195 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 17 00:31:04.564247 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 17 00:31:04.565736 kernel: ata3.00: configured for UDMA/100 May 17 00:31:04.567914 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 17 00:31:04.648852 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 17 00:31:04.671996 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:31:04.672018 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 17 00:31:05.280745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 17 00:31:05.283833 disk-uuid[521]: The operation has completed successfully. May 17 00:31:05.336717 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:31:05.340536 systemd[1]: Finished disk-uuid.service. May 17 00:31:05.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.369289 systemd[1]: Starting verity-setup.service... 
May 17 00:31:05.415241 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 17 00:31:05.508243 systemd[1]: Found device dev-mapper-usr.device. May 17 00:31:05.513428 systemd[1]: Mounting sysusr-usr.mount... May 17 00:31:05.519228 systemd[1]: Finished verity-setup.service. May 17 00:31:05.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.703109 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:31:05.705699 systemd[1]: Mounted sysusr-usr.mount. May 17 00:31:05.715695 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:31:05.729566 systemd[1]: Starting ignition-setup.service... May 17 00:31:05.731267 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:31:05.760799 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:31:05.760862 kernel: BTRFS info (device vda6): using free space tree May 17 00:31:05.760876 kernel: BTRFS info (device vda6): has skinny extents May 17 00:31:05.810494 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:31:05.942724 systemd[1]: Finished ignition-setup.service. May 17 00:31:05.964351 kernel: kauditd_printk_skb: 5 callbacks suppressed May 17 00:31:05.964387 kernel: audit: type=1130 audit(1747441865.950:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.951662 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:31:05.964736 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:31:05.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.982181 kernel: audit: type=1130 audit(1747441865.972:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:05.982232 kernel: audit: type=1334 audit(1747441865.978:18): prog-id=9 op=LOAD May 17 00:31:05.978000 audit: BPF prog-id=9 op=LOAD May 17 00:31:05.983474 systemd[1]: Starting systemd-networkd.service... May 17 00:31:06.060683 systemd-networkd[716]: lo: Link UP May 17 00:31:06.062413 systemd-networkd[716]: lo: Gained carrier May 17 00:31:06.062982 systemd-networkd[716]: Enumeration completed May 17 00:31:06.063284 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:31:06.076694 systemd[1]: Started systemd-networkd.service. May 17 00:31:06.079600 systemd-networkd[716]: eth0: Link UP May 17 00:31:06.080188 systemd-networkd[716]: eth0: Gained carrier May 17 00:31:06.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.086406 systemd[1]: Reached target network.target. 
May 17 00:31:06.101716 kernel: audit: type=1130 audit(1747441866.086:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.094886 systemd[1]: Starting iscsiuio.service... May 17 00:31:06.104793 systemd[1]: Started iscsiuio.service. May 17 00:31:06.148144 kernel: audit: type=1130 audit(1747441866.111:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.148187 kernel: audit: type=1130 audit(1747441866.131:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.112365 systemd[1]: Starting iscsid.service... May 17 00:31:06.159435 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:31:06.159435 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 00:31:06.159435 iscsid[728]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 17 00:31:06.159435 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:31:06.159435 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:31:06.159435 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:31:06.159435 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:31:06.203140 kernel: audit: type=1130 audit(1747441866.168:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.132461 systemd[1]: Started iscsid.service. May 17 00:31:06.199987 ignition[710]: Ignition 2.14.0 May 17 00:31:06.134266 systemd[1]: Starting dracut-initqueue.service... May 17 00:31:06.199998 ignition[710]: Stage: fetch-offline May 17 00:31:06.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.166705 systemd[1]: Finished dracut-initqueue.service. May 17 00:31:06.223956 kernel: audit: type=1130 audit(1747441866.208:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 00:31:06.200048 ignition[710]: no configs at "/usr/lib/ignition/base.d" May 17 00:31:06.169303 systemd[1]: Reached target remote-fs-pre.target. May 17 00:31:06.200058 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:31:06.176362 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:31:06.200242 ignition[710]: parsed url from cmdline: "" May 17 00:31:06.183744 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:31:06.200246 ignition[710]: no config URL provided May 17 00:31:06.190917 systemd[1]: Reached target remote-fs.target. May 17 00:31:06.200252 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:31:06.196060 systemd[1]: Starting dracut-pre-mount.service... May 17 00:31:06.200262 ignition[710]: no config at "/usr/lib/ignition/user.ign" May 17 00:31:06.206777 systemd[1]: Finished dracut-pre-mount.service. May 17 00:31:06.200284 ignition[710]: op(1): [started] loading QEMU firmware config module May 17 00:31:06.200291 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg" May 17 00:31:06.243861 ignition[710]: op(1): [finished] loading QEMU firmware config module May 17 00:31:06.245961 ignition[710]: parsing config with SHA512: fe50de9b822edad890db44562b98576cad201872791f4a28a8a6fa03b65d00a1bcb5d4a878d854a35523ad49c994159da45c069baf050e334abbea4f26a548b2 May 17 00:31:06.256276 unknown[710]: fetched base config from "system" May 17 00:31:06.256518 unknown[710]: fetched user config from "qemu" May 17 00:31:06.257221 ignition[710]: fetch-offline: fetch-offline passed May 17 00:31:06.259717 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:31:06.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.257327 ignition[710]: Ignition finished successfully May 17 00:31:06.278706 kernel: audit: type=1130 audit(1747441866.259:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.261674 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 00:31:06.265501 systemd[1]: Starting ignition-kargs.service... May 17 00:31:06.296410 ignition[743]: Ignition 2.14.0 May 17 00:31:06.296427 ignition[743]: Stage: kargs May 17 00:31:06.296557 ignition[743]: no configs at "/usr/lib/ignition/base.d" May 17 00:31:06.296570 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:31:06.301652 ignition[743]: kargs: kargs passed May 17 00:31:06.302512 ignition[743]: Ignition finished successfully May 17 00:31:06.306335 systemd[1]: Finished ignition-kargs.service. May 17 00:31:06.320473 kernel: audit: type=1130 audit(1747441866.307:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.308497 systemd[1]: Starting ignition-disks.service... 
May 17 00:31:06.332463 ignition[749]: Ignition 2.14.0 May 17 00:31:06.332482 ignition[749]: Stage: disks May 17 00:31:06.332667 ignition[749]: no configs at "/usr/lib/ignition/base.d" May 17 00:31:06.332680 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:31:06.333689 ignition[749]: disks: disks passed May 17 00:31:06.333746 ignition[749]: Ignition finished successfully May 17 00:31:06.341708 systemd[1]: Finished ignition-disks.service. May 17 00:31:06.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.346684 systemd[1]: Reached target initrd-root-device.target. May 17 00:31:06.349299 systemd[1]: Reached target local-fs-pre.target. May 17 00:31:06.350360 systemd[1]: Reached target local-fs.target. May 17 00:31:06.352389 systemd[1]: Reached target sysinit.target. May 17 00:31:06.357511 systemd[1]: Reached target basic.target. May 17 00:31:06.361487 systemd[1]: Starting systemd-fsck-root.service... May 17 00:31:06.380031 systemd-fsck[757]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 17 00:31:06.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.545418 systemd[1]: Finished systemd-fsck-root.service. May 17 00:31:06.547583 systemd[1]: Mounting sysroot.mount... May 17 00:31:06.570228 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:31:06.571402 systemd[1]: Mounted sysroot.mount. May 17 00:31:06.573254 systemd[1]: Reached target initrd-root-fs.target. May 17 00:31:06.589750 systemd[1]: Mounting sysroot-usr.mount... May 17 00:31:06.594862 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 17 00:31:06.595795 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:31:06.604611 systemd[1]: Reached target ignition-diskful.target. May 17 00:31:06.638020 systemd[1]: Mounted sysroot-usr.mount. May 17 00:31:06.659429 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:31:06.664186 systemd[1]: Starting initrd-setup-root.service... May 17 00:31:06.681955 initrd-setup-root[768]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:31:06.685276 initrd-setup-root[776]: cut: /sysroot/etc/group: No such file or directory May 17 00:31:06.700198 initrd-setup-root[784]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:31:06.711677 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (763) May 17 00:31:06.711721 initrd-setup-root[792]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:31:06.720225 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:31:06.720291 kernel: BTRFS info (device vda6): using free space tree May 17 00:31:06.720319 kernel: BTRFS info (device vda6): has skinny extents May 17 00:31:06.749348 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:31:06.794610 systemd[1]: Finished initrd-setup-root.service. May 17 00:31:06.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:06.800612 systemd[1]: Starting ignition-mount.service... May 17 00:31:06.809308 systemd[1]: Starting sysroot-boot.service... May 17 00:31:06.818719 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:31:06.818840 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:31:06.915382 ignition[829]: INFO : Ignition 2.14.0 May 17 00:31:06.915382 ignition[829]: INFO : Stage: mount May 17 00:31:06.915382 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:31:06.915382 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:31:06.915382 ignition[829]: INFO : mount: mount passed May 17 00:31:06.915382 ignition[829]: INFO : Ignition finished successfully May 17 00:31:06.924503 systemd[1]: Finished ignition-mount.service. May 17 00:31:06.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:06.954508 systemd[1]: Starting ignition-files.service... May 17 00:31:06.989916 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:31:07.004125 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (839) May 17 00:31:07.004197 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:31:07.004213 kernel: BTRFS info (device vda6): using free space tree May 17 00:31:07.004231 kernel: BTRFS info (device vda6): has skinny extents May 17 00:31:07.052807 systemd[1]: Finished sysroot-boot.service. May 17 00:31:07.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:07.065960 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:31:07.083024 ignition[858]: INFO : Ignition 2.14.0 May 17 00:31:07.083024 ignition[858]: INFO : Stage: files May 17 00:31:07.096445 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:31:07.096445 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:31:07.096445 ignition[858]: DEBUG : files: compiled without relabeling support, skipping May 17 00:31:07.112899 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:31:07.112899 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:31:07.135543 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:31:07.141357 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:31:07.143937 unknown[858]: wrote ssh authorized keys file for user: core May 17 00:31:07.145778 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:31:07.155594 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 17 00:31:07.155594 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:31:07.163562 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:31:07.163562 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:31:07.163562 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:31:07.163562 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:31:07.163562 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:31:07.163562 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 17 00:31:07.639083 systemd-networkd[716]: eth0: Gained IPv6LL May 17 00:31:07.914408 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 17 00:31:08.591647 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 17 00:31:08.591647 ignition[858]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 17 00:31:08.608575 ignition[858]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:31:08.608575 ignition[858]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 17 00:31:08.608575 ignition[858]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 17 00:31:08.608575 ignition[858]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" May 17 00:31:08.608575 ignition[858]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:31:08.757299 ignition[858]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 17 00:31:08.757299 ignition[858]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 17 00:31:08.757299 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:31:08.757299 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:31:08.757299 ignition[858]: INFO : files: files passed May 17 00:31:08.757299 ignition[858]: INFO : Ignition finished successfully May 17 00:31:08.773491 systemd[1]: Finished ignition-files.service. May 17 00:31:08.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.787527 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:31:08.790183 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:31:08.797755 systemd[1]: Starting ignition-quench.service... May 17 00:31:08.810645 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:31:08.810779 systemd[1]: Finished ignition-quench.service. May 17 00:31:08.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.818345 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:31:08.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.821409 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 17 00:31:08.820275 systemd[1]: Reached target ignition-complete.target. May 17 00:31:08.827544 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:31:08.823523 systemd[1]: Starting initrd-parse-etc.service... May 17 00:31:08.852098 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:31:08.852269 systemd[1]: Finished initrd-parse-etc.service. May 17 00:31:08.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.858598 systemd[1]: Reached target initrd-fs.target. May 17 00:31:08.860338 systemd[1]: Reached target initrd.target. 
May 17 00:31:08.862243 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:31:08.863323 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:31:08.883462 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:31:08.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.884634 systemd[1]: Starting initrd-cleanup.service... May 17 00:31:08.901017 systemd[1]: Stopped target nss-lookup.target. May 17 00:31:08.901321 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:31:08.904376 systemd[1]: Stopped target timers.target. May 17 00:31:08.906439 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:31:08.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.906615 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:31:08.908558 systemd[1]: Stopped target initrd.target. May 17 00:31:08.910723 systemd[1]: Stopped target basic.target. May 17 00:31:08.911663 systemd[1]: Stopped target ignition-complete.target. May 17 00:31:08.914461 systemd[1]: Stopped target ignition-diskful.target. May 17 00:31:08.920812 systemd[1]: Stopped target initrd-root-device.target. May 17 00:31:08.922963 systemd[1]: Stopped target remote-fs.target. May 17 00:31:08.925049 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:31:08.926265 systemd[1]: Stopped target sysinit.target. May 17 00:31:08.943937 systemd[1]: Stopped target local-fs.target. May 17 00:31:08.945891 systemd[1]: Stopped target local-fs-pre.target. May 17 00:31:08.992585 systemd[1]: Stopped target swap.target. May 17 00:31:09.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:08.994973 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:31:08.995131 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:31:09.007451 systemd[1]: Stopped target cryptsetup.target. May 17 00:31:09.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.009849 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:31:09.010003 systemd[1]: Stopped dracut-initqueue.service. May 17 00:31:09.013194 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:31:09.013716 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:31:09.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.020601 systemd[1]: Stopped target paths.target. May 17 00:31:09.022678 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:31:09.026486 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:31:09.029347 systemd[1]: Stopped target slices.target. May 17 00:31:09.029540 systemd[1]: Stopped target sockets.target. 
May 17 00:31:09.032397 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:31:09.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.032551 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:31:09.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.037776 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:31:09.048951 iscsid[728]: iscsid shutting down. May 17 00:31:09.037934 systemd[1]: Stopped ignition-files.service. May 17 00:31:09.041364 systemd[1]: Stopping ignition-mount.service... May 17 00:31:09.048583 systemd[1]: Stopping iscsid.service... May 17 00:31:09.068469 ignition[898]: INFO : Ignition 2.14.0 May 17 00:31:09.068469 ignition[898]: INFO : Stage: umount May 17 00:31:09.068469 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:31:09.068469 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 17 00:31:09.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.070102 systemd[1]: Stopping sysroot-boot.service... May 17 00:31:09.103154 ignition[898]: INFO : umount: umount passed May 17 00:31:09.103154 ignition[898]: INFO : Ignition finished successfully May 17 00:31:09.071054 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:31:09.071238 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:31:09.072506 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:31:09.072613 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:31:09.075631 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:31:09.075734 systemd[1]: Stopped iscsid.service. May 17 00:31:09.086680 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:31:09.086777 systemd[1]: Stopped ignition-mount.service. May 17 00:31:09.132923 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:31:09.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.133627 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:31:09.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.133709 systemd[1]: Closed iscsid.socket. 
May 17 00:31:09.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.135253 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:31:09.135294 systemd[1]: Stopped ignition-disks.service. May 17 00:31:09.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.136415 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:31:09.136454 systemd[1]: Stopped ignition-kargs.service. May 17 00:31:09.141161 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:31:09.141238 systemd[1]: Stopped ignition-setup.service. May 17 00:31:09.147372 systemd[1]: Stopping iscsiuio.service... May 17 00:31:09.155014 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:31:09.155120 systemd[1]: Stopped iscsiuio.service. May 17 00:31:09.157579 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:31:09.157660 systemd[1]: Finished initrd-cleanup.service. May 17 00:31:09.159550 systemd[1]: Stopped target network.target. May 17 00:31:09.165733 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:31:09.165784 systemd[1]: Closed iscsiuio.socket. May 17 00:31:09.166961 systemd[1]: Stopping systemd-networkd.service... May 17 00:31:09.171544 systemd[1]: Stopping systemd-resolved.service... May 17 00:31:09.181247 systemd-networkd[716]: eth0: DHCPv6 lease lost May 17 00:31:09.210573 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:31:09.213295 systemd[1]: Stopped systemd-resolved.service. May 17 00:31:09.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.217860 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:31:09.218002 systemd[1]: Stopped systemd-networkd.service. May 17 00:31:09.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.220786 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:31:09.220821 systemd[1]: Closed systemd-networkd.socket. May 17 00:31:09.241884 systemd[1]: Stopping network-cleanup.service... May 17 00:31:09.271000 audit: BPF prog-id=6 op=UNLOAD May 17 00:31:09.262127 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:31:09.262258 systemd[1]: Stopped parse-ip-for-networkd.service. 
May 17 00:31:09.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.273355 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:31:09.273444 systemd[1]: Stopped systemd-sysctl.service. May 17 00:31:09.280000 audit: BPF prog-id=9 op=UNLOAD May 17 00:31:09.282164 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:31:09.282264 systemd[1]: Stopped systemd-modules-load.service. May 17 00:31:09.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.289273 systemd[1]: Stopping systemd-udevd.service... May 17 00:31:09.297421 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:31:09.327366 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:31:09.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.327498 systemd[1]: Stopped network-cleanup.service. May 17 00:31:09.334548 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:31:09.335077 systemd[1]: Stopped systemd-udevd.service. May 17 00:31:09.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.350409 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:31:09.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.350461 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:31:09.363303 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:31:09.363345 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:31:09.380795 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:31:09.380874 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:31:09.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.381977 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:31:09.382017 systemd[1]: Stopped dracut-cmdline.service. May 17 00:31:09.418693 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 17 00:31:09.418763 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:31:09.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.422039 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:31:09.437789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:31:09.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.438152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:31:09.441682 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:31:09.441737 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:31:09.456493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:31:09.456567 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:31:09.461379 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:31:09.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.481063 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:31:09.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:09.481205 systemd[1]: Stopped sysroot-boot.service. May 17 00:31:09.523216 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:31:09.523308 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:31:09.525333 systemd[1]: Reached target initrd-switch-root.target. May 17 00:31:09.529398 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:31:09.529467 systemd[1]: Stopped initrd-setup-root.service. May 17 00:31:09.536071 systemd[1]: Starting initrd-switch-root.service... May 17 00:31:09.559154 systemd[1]: Switching root. May 17 00:31:09.583357 systemd-journald[197]: Journal stopped May 17 00:31:15.124874 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). May 17 00:31:15.124936 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:31:15.124955 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:31:15.124969 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:31:15.124983 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:31:15.125000 kernel: SELinux: policy capability open_perms=1 May 17 00:31:15.125016 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:31:15.125033 kernel: SELinux: policy capability always_check_network=0 May 17 00:31:15.125058 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:31:15.125071 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:31:15.125085 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:31:15.125099 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:31:15.125121 systemd[1]: Successfully loaded SELinux policy in 77.235ms. May 17 00:31:15.125157 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.378ms. May 17 00:31:15.125199 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:31:15.125217 systemd[1]: Detected virtualization kvm. May 17 00:31:15.125232 systemd[1]: Detected architecture x86-64. May 17 00:31:15.125247 systemd[1]: Detected first boot. May 17 00:31:15.125263 systemd[1]: Initializing machine ID from VM UUID. May 17 00:31:15.125281 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:31:15.125296 systemd[1]: Populated /etc with preset unit settings. May 17 00:31:15.125314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:31:15.125332 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:31:15.125349 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:31:15.125364 kernel: kauditd_printk_skb: 63 callbacks suppressed May 17 00:31:15.125379 kernel: audit: type=1334 audit(1747441874.813:82): prog-id=12 op=LOAD May 17 00:31:15.125392 kernel: audit: type=1334 audit(1747441874.814:83): prog-id=3 op=UNLOAD May 17 00:31:15.125409 kernel: audit: type=1334 audit(1747441874.819:84): prog-id=13 op=LOAD May 17 00:31:15.125424 kernel: audit: type=1334 audit(1747441874.824:85): prog-id=14 op=LOAD May 17 00:31:15.125440 kernel: audit: type=1334 audit(1747441874.825:86): prog-id=4 op=UNLOAD May 17 00:31:15.125454 kernel: audit: type=1334 audit(1747441874.829:87): prog-id=5 op=UNLOAD May 17 00:31:15.125472 kernel: audit: type=1334 audit(1747441874.833:88): prog-id=15 op=LOAD May 17 00:31:15.125486 kernel: audit: type=1334 audit(1747441874.834:89): prog-id=12 op=UNLOAD May 17 00:31:15.125499 kernel: audit: type=1334 audit(1747441874.838:90): prog-id=16 op=LOAD May 17 00:31:15.125513 kernel: audit: type=1334 audit(1747441874.839:91): prog-id=17 op=LOAD May 17 00:31:15.125527 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:31:15.125547 systemd[1]: Stopped initrd-switch-root.service. 
May 17 00:31:15.125563 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:31:15.125582 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:31:15.125598 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:31:15.125613 systemd[1]: Created slice system-getty.slice. May 17 00:31:15.125628 systemd[1]: Created slice system-modprobe.slice. May 17 00:31:15.125643 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:31:15.125661 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:31:15.125682 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:31:15.125700 systemd[1]: Created slice user.slice. May 17 00:31:15.125719 systemd[1]: Started systemd-ask-password-console.path. May 17 00:31:15.125738 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:31:15.125759 systemd[1]: Set up automount boot.automount. May 17 00:31:15.125778 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:31:15.125798 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:31:15.125815 systemd[1]: Stopped target initrd-fs.target. May 17 00:31:15.125830 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:31:15.125845 systemd[1]: Reached target integritysetup.target. May 17 00:31:15.125860 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:31:15.125875 systemd[1]: Reached target remote-fs.target. May 17 00:31:15.125890 systemd[1]: Reached target slices.target. May 17 00:31:15.125905 systemd[1]: Reached target swap.target. May 17 00:31:15.125920 systemd[1]: Reached target torcx.target. May 17 00:31:15.125935 systemd[1]: Reached target veritysetup.target. May 17 00:31:15.125950 systemd[1]: Listening on systemd-coredump.socket. May 17 00:31:15.125967 systemd[1]: Listening on systemd-initctl.socket. May 17 00:31:15.125981 systemd[1]: Listening on systemd-networkd.socket. May 17 00:31:15.125996 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:31:15.126009 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:31:15.126023 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:31:15.126037 systemd[1]: Mounting dev-hugepages.mount... May 17 00:31:15.126062 systemd[1]: Mounting dev-mqueue.mount... May 17 00:31:15.126078 systemd[1]: Mounting media.mount... May 17 00:31:15.126096 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:31:15.126120 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:31:15.126141 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:31:15.126165 systemd[1]: Mounting tmp.mount... May 17 00:31:15.126213 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:31:15.126230 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:31:15.126245 systemd[1]: Starting kmod-static-nodes.service... May 17 00:31:15.126260 systemd[1]: Starting modprobe@configfs.service... May 17 00:31:15.126275 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:31:15.126292 systemd[1]: Starting modprobe@drm.service... May 17 00:31:15.126311 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:31:15.126327 systemd[1]: Starting modprobe@fuse.service... May 17 00:31:15.126342 systemd[1]: Starting modprobe@loop.service... May 17 00:31:15.126358 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 17 00:31:15.126374 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:31:15.126389 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:31:15.126405 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:31:15.126420 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:31:15.126435 systemd[1]: Stopped systemd-journald.service. May 17 00:31:15.126452 systemd[1]: systemd-journald.service: Consumed 1.003s CPU time. May 17 00:31:15.126467 systemd[1]: Starting systemd-journald.service... May 17 00:31:15.126483 kernel: fuse: init (API version 7.34) May 17 00:31:15.126497 kernel: loop: module loaded May 17 00:31:15.126511 systemd[1]: Starting systemd-modules-load.service... May 17 00:31:15.126527 systemd[1]: Starting systemd-network-generator.service... May 17 00:31:15.126542 systemd[1]: Starting systemd-remount-fs.service... May 17 00:31:15.126557 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:31:15.126572 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:31:15.126590 systemd[1]: Stopped verity-setup.service. May 17 00:31:15.126608 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:31:15.126623 systemd[1]: Mounted dev-hugepages.mount. May 17 00:31:15.126638 systemd[1]: Mounted dev-mqueue.mount. May 17 00:31:15.126655 systemd[1]: Mounted media.mount. May 17 00:31:15.126675 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:31:15.126694 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:31:15.126713 systemd[1]: Mounted tmp.mount. May 17 00:31:15.126732 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:31:15.126754 systemd[1]: Finished kmod-static-nodes.service. May 17 00:31:15.126773 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:31:15.126793 systemd[1]: Finished modprobe@configfs.service. May 17 00:31:15.126815 systemd-journald[1063]: Journal started May 17 00:31:15.126880 systemd-journald[1063]: Runtime Journal (/run/log/journal/581fc9d72688477caa762b28c2d282c3) is 6.0M, max 48.5M, 42.5M free. 
May 17 00:31:09.704000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:31:10.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:31:10.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:31:10.203000 audit: BPF prog-id=10 op=LOAD May 17 00:31:10.203000 audit: BPF prog-id=10 op=UNLOAD May 17 00:31:10.203000 audit: BPF prog-id=11 op=LOAD May 17 00:31:10.203000 audit: BPF prog-id=11 op=UNLOAD May 17 00:31:10.302000 audit[986]: AVC avc: denied { associate } for pid=986 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:31:10.302000 audit[986]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001800b2 a1=c000194000 a2=c000192000 a3=32 items=0 ppid=969 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:31:10.302000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:31:10.302000 audit[986]: AVC avc: denied { associate } for pid=986 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:31:10.302000 audit[986]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000180189 a2=1ed a3=0 items=2 ppid=969 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:31:10.302000 audit: CWD cwd="/" May 17 00:31:10.302000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:10.302000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:10.302000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:31:14.813000 audit: BPF prog-id=12 op=LOAD May 17 00:31:14.814000 audit: BPF prog-id=3 op=UNLOAD May 17 00:31:14.819000 audit: BPF prog-id=13 op=LOAD May 17 00:31:14.824000 audit: BPF prog-id=14 op=LOAD May 17 00:31:14.825000 audit: BPF prog-id=4 op=UNLOAD May 17 00:31:14.829000 audit: BPF prog-id=5 op=UNLOAD May 17 00:31:14.833000 audit: BPF prog-id=15 op=LOAD May 17 00:31:14.834000 audit: BPF prog-id=12 op=UNLOAD May 17 
00:31:14.838000 audit: BPF prog-id=16 op=LOAD May 17 00:31:14.839000 audit: BPF prog-id=17 op=LOAD May 17 00:31:14.839000 audit: BPF prog-id=13 op=UNLOAD May 17 00:31:14.839000 audit: BPF prog-id=14 op=UNLOAD May 17 00:31:14.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:14.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:14.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:14.873000 audit: BPF prog-id=15 op=UNLOAD May 17 00:31:15.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.065000 audit: BPF prog-id=18 op=LOAD May 17 00:31:15.065000 audit: BPF prog-id=19 op=LOAD May 17 00:31:15.065000 audit: BPF prog-id=20 op=LOAD May 17 00:31:15.065000 audit: BPF prog-id=16 op=UNLOAD May 17 00:31:15.065000 audit: BPF prog-id=17 op=UNLOAD May 17 00:31:15.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.122000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:31:15.122000 audit[1063]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc00240ee0 a2=4000 a3=7ffc00240f7c items=0 ppid=1 pid=1063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:31:15.122000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:31:15.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:15.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:14.811120 systemd[1]: Queued start job for default target multi-user.target. May 17 00:31:10.282082 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:31:15.132227 systemd[1]: Started systemd-journald.service. May 17 00:31:14.811136 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 17 00:31:10.286997 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:31:14.841325 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:31:10.287022 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:31:14.841752 systemd[1]: systemd-journald.service: Consumed 1.003s CPU time. May 17 00:31:10.287065 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:31:10.287079 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:31:10.287128 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:31:10.287149 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:31:10.287438 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:31:15.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:10.287486 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:31:10.287502 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:31:10.301929 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:31:10.301988 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:31:15.133691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:31:10.302017 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:31:15.133904 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:31:10.302037 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:31:10.302065 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:31:10.302083 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:31:14.200233 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:14Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:31:14.200545 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:14Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:31:14.200730 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:14Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:31:14.200932 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:14Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:31:14.200989 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:14Z" level=debug msg="profile 
applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:31:14.201060 /usr/lib/systemd/system-generators/torcx-generator[986]: time="2025-05-17T00:31:14Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:31:15.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.136462 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:31:15.136665 systemd[1]: Finished modprobe@drm.service. May 17 00:31:15.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.137903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:31:15.138077 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:31:15.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.139543 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:31:15.139722 systemd[1]: Finished modprobe@fuse.service. May 17 00:31:15.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.141004 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:31:15.141169 systemd[1]: Finished modprobe@loop.service. May 17 00:31:15.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.142470 systemd[1]: Finished systemd-modules-load.service. 
May 17 00:31:15.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.144061 systemd[1]: Finished systemd-network-generator.service. May 17 00:31:15.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.145453 systemd[1]: Finished systemd-remount-fs.service. May 17 00:31:15.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.147140 systemd[1]: Reached target network-pre.target. May 17 00:31:15.150785 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:31:15.153199 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:31:15.154399 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:31:15.166296 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:31:15.169323 systemd[1]: Starting systemd-journal-flush.service... May 17 00:31:15.170705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:31:15.172412 systemd[1]: Starting systemd-random-seed.service... May 17 00:31:15.173555 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:31:15.176527 systemd[1]: Starting systemd-sysctl.service... May 17 00:31:15.184442 systemd-journald[1063]: Time spent on flushing to /var/log/journal/581fc9d72688477caa762b28c2d282c3 is 22.960ms for 1102 entries. May 17 00:31:15.184442 systemd-journald[1063]: System Journal (/var/log/journal/581fc9d72688477caa762b28c2d282c3) is 8.0M, max 195.6M, 187.6M free. May 17 00:31:15.625401 systemd-journald[1063]: Received client request to flush runtime journal. May 17 00:31:15.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:15.183260 systemd[1]: Starting systemd-sysusers.service... 
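
The flush statistics reported by systemd-journald above (22.960ms for 1102 entries) work out to roughly 21 microseconds per entry, on the order of 48,000 entries per second; a quick check:

    # Back-of-the-envelope rate from the journald flush statistics above.
    flush_seconds = 22.960e-3
    entries = 1102
    print(f"{flush_seconds / entries * 1e6:.1f} us/entry")   # ~20.8
    print(f"{entries / flush_seconds:,.0f} entries/s")       # ~48,000
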
May 17 00:31:15.188099 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:31:15.190833 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:31:15.626382 udevadm[1089]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:31:15.192025 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:31:15.198068 systemd[1]: Starting systemd-udev-settle.service... May 17 00:31:15.338651 systemd[1]: Finished systemd-sysctl.service. May 17 00:31:15.362829 systemd[1]: Finished systemd-sysusers.service. May 17 00:31:15.365702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:31:15.552117 systemd[1]: Finished systemd-random-seed.service. May 17 00:31:15.553420 systemd[1]: Reached target first-boot-complete.target. May 17 00:31:15.582132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:31:15.627456 systemd[1]: Finished systemd-journal-flush.service. May 17 00:31:15.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:16.300778 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:31:16.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:16.302000 audit: BPF prog-id=21 op=LOAD May 17 00:31:16.303000 audit: BPF prog-id=22 op=LOAD May 17 00:31:16.303000 audit: BPF prog-id=7 op=UNLOAD May 17 00:31:16.303000 audit: BPF prog-id=8 op=UNLOAD May 17 00:31:16.304430 systemd[1]: Starting systemd-udevd.service... May 17 00:31:16.338662 systemd-udevd[1094]: Using default interface naming scheme 'v252'. May 17 00:31:16.379789 systemd[1]: Started systemd-udevd.service. May 17 00:31:16.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:16.395000 audit: BPF prog-id=23 op=LOAD May 17 00:31:16.397137 systemd[1]: Starting systemd-networkd.service... May 17 00:31:16.410000 audit: BPF prog-id=24 op=LOAD May 17 00:31:16.410000 audit: BPF prog-id=25 op=LOAD May 17 00:31:16.410000 audit: BPF prog-id=26 op=LOAD May 17 00:31:16.412047 systemd[1]: Starting systemd-userdbd.service... May 17 00:31:16.443811 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:31:16.516417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:31:16.584203 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:31:16.606210 kernel: ACPI: button: Power Button [PWRF] May 17 00:31:16.567000 audit[1111]: AVC avc: denied { confidentiality } for pid=1111 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:31:16.639193 systemd[1]: Started systemd-userdbd.service. May 17 00:31:16.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:16.567000 audit[1111]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557290ace830 a1=338ac a2=7f9bb6e7abc5 a3=5 items=110 ppid=1094 pid=1111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:31:16.567000 audit: CWD cwd="/" May 17 00:31:16.567000 audit: PATH item=0 name=(null) inode=32 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=1 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=2 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=3 name=(null) inode=11195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=4 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=5 name=(null) inode=11196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=6 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=7 name=(null) inode=11197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=8 name=(null) inode=11197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=9 name=(null) inode=11198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=10 name=(null) inode=11197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=11 name=(null) inode=11199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=12 name=(null) inode=11197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=13 name=(null) inode=11200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=14 name=(null) inode=11197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=15 name=(null) inode=11201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=16 name=(null) inode=11197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=17 name=(null) inode=11202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=18 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=19 name=(null) inode=11203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=20 name=(null) inode=11203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=21 name=(null) inode=11204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=22 name=(null) inode=11203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=23 name=(null) inode=11205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=24 name=(null) inode=11203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=25 name=(null) inode=11206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=26 name=(null) inode=11203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=27 name=(null) inode=11207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=28 name=(null) inode=11203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=29 name=(null) inode=11208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=30 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=31 name=(null) inode=11209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=32 name=(null) inode=11209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=33 name=(null) inode=11210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=34 name=(null) inode=11209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=35 name=(null) inode=11211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=36 name=(null) inode=11209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=37 name=(null) inode=11212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=38 name=(null) inode=11209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=39 name=(null) inode=11213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=40 name=(null) inode=11209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=41 name=(null) inode=11214 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=42 name=(null) inode=11194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=43 name=(null) inode=11215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=44 name=(null) inode=11215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=45 name=(null) inode=11216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=46 name=(null) inode=11215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=47 
name=(null) inode=11217 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=48 name=(null) inode=11215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=49 name=(null) inode=11218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=50 name=(null) inode=11215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=51 name=(null) inode=11219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=52 name=(null) inode=11215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=53 name=(null) inode=11220 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=54 name=(null) inode=32 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=55 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=56 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=57 name=(null) inode=11222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=58 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=59 name=(null) inode=11223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=60 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=61 name=(null) inode=11224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=62 name=(null) inode=11224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=63 name=(null) inode=11225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=64 name=(null) inode=11224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=65 name=(null) inode=11226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=66 name=(null) inode=11224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=67 name=(null) inode=11227 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=68 name=(null) inode=11224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=69 name=(null) inode=11228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=70 name=(null) inode=11224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=71 name=(null) inode=11229 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=72 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=73 name=(null) inode=11230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=74 name=(null) inode=11230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=75 name=(null) inode=11231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=76 name=(null) inode=11230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=77 name=(null) inode=11232 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=78 name=(null) inode=11230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=79 name=(null) inode=11233 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=80 name=(null) inode=11230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=81 name=(null) inode=11234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=82 name=(null) inode=11230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=83 name=(null) inode=11235 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=84 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=85 name=(null) inode=11236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=86 name=(null) inode=11236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=87 name=(null) inode=11237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=88 name=(null) inode=11236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=89 name=(null) inode=11238 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=90 name=(null) inode=11236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=91 name=(null) inode=11239 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=92 name=(null) inode=11236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=93 name=(null) inode=11240 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=94 name=(null) inode=11236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=95 name=(null) inode=11241 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH 
item=96 name=(null) inode=11221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=97 name=(null) inode=11242 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=98 name=(null) inode=11242 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=99 name=(null) inode=11243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=100 name=(null) inode=11242 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=101 name=(null) inode=11244 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=102 name=(null) inode=11242 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=103 name=(null) inode=11245 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=104 name=(null) inode=11242 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=105 name=(null) inode=11246 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=106 name=(null) inode=11242 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=107 name=(null) inode=11247 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PATH item=109 name=(null) inode=11249 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:31:16.567000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:31:16.851108 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:31:16.882060 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:31:16.889296 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:31:16.889495 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:31:16.919383 systemd-networkd[1105]: lo: Link UP May 17 00:31:16.919398 
systemd-networkd[1105]: lo: Gained carrier May 17 00:31:16.919929 systemd-networkd[1105]: Enumeration completed May 17 00:31:16.920070 systemd-networkd[1105]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:31:16.921344 systemd-networkd[1105]: eth0: Link UP May 17 00:31:16.921357 systemd-networkd[1105]: eth0: Gained carrier May 17 00:31:16.921620 systemd[1]: Started systemd-networkd.service. May 17 00:31:16.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.046231 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:31:17.104060 kernel: kvm: Nested Virtualization enabled May 17 00:31:17.104221 kernel: SVM: kvm: Nested Paging enabled May 17 00:31:17.104280 kernel: SVM: Virtual VMLOAD VMSAVE supported May 17 00:31:17.109795 kernel: SVM: Virtual GIF supported May 17 00:31:17.120432 systemd-networkd[1105]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:31:17.276259 kernel: EDAC MC: Ver: 3.0.0 May 17 00:31:17.316810 systemd[1]: Finished systemd-udev-settle.service. May 17 00:31:17.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.325581 systemd[1]: Starting lvm2-activation-early.service... May 17 00:31:17.349678 lvm[1129]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:31:17.389658 systemd[1]: Finished lvm2-activation-early.service. May 17 00:31:17.391245 systemd[1]: Reached target cryptsetup.target. May 17 00:31:17.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.399137 systemd[1]: Starting lvm2-activation.service... May 17 00:31:17.403210 lvm[1130]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:31:17.434737 systemd[1]: Finished lvm2-activation.service. May 17 00:31:17.436296 systemd[1]: Reached target local-fs-pre.target. May 17 00:31:17.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.437349 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:31:17.437376 systemd[1]: Reached target local-fs.target. May 17 00:31:17.438355 systemd[1]: Reached target machines.target. May 17 00:31:17.449243 systemd[1]: Starting ldconfig.service... May 17 00:31:17.453797 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:31:17.453855 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:31:17.459341 systemd[1]: Starting systemd-boot-update.service... May 17 00:31:17.463894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:31:17.472758 systemd[1]: Starting systemd-machine-id-commit.service... 
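The DHCPv4 lease recorded above (10.0.0.61/16 from gateway 10.0.0.1) is handed out because eth0 matched /usr/lib/systemd/network/zz-default.network. A minimal .network file that yields the same behaviour looks roughly like the sketch below; the file actually shipped by Flatcar likely carries additional options, so this is illustrative only:

    # illustrative stand-in for zz-default.network
    [Match]
    Name=*

    [Network]
    DHCP=yes

The resulting link and lease state can be inspected with networkctl status eth0.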
May 17 00:31:17.479415 systemd[1]: Starting systemd-sysext.service... May 17 00:31:17.487948 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1132 (bootctl) May 17 00:31:17.491328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:31:17.507257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:31:17.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.509524 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:31:17.519192 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:31:17.519377 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:31:17.562780 kernel: loop0: detected capacity change from 0 to 229808 May 17 00:31:17.568938 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:31:17.574522 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:31:17.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.601283 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:31:17.629241 systemd-fsck[1141]: fsck.fat 4.2 (2021-01-31) May 17 00:31:17.629241 systemd-fsck[1141]: /dev/vda1: 790 files, 120726/258078 clusters May 17 00:31:17.635453 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:31:17.637228 kernel: loop1: detected capacity change from 0 to 229808 May 17 00:31:17.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.646874 systemd[1]: Mounting boot.mount... May 17 00:31:17.665525 (sd-sysext)[1144]: Using extensions 'kubernetes'. May 17 00:31:17.665900 (sd-sysext)[1144]: Merged extensions into '/usr'. May 17 00:31:17.696214 systemd[1]: Mounted boot.mount. May 17 00:31:17.707982 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:31:17.709752 systemd[1]: Mounting usr-share-oem.mount... May 17 00:31:17.711026 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:31:17.713108 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:31:17.715246 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:31:17.723255 systemd[1]: Starting modprobe@loop.service... May 17 00:31:17.724532 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:31:17.724670 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:31:17.724811 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:31:17.731106 systemd[1]: Finished systemd-boot-update.service. 
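The (sd-sysext) messages above show a 'kubernetes' system extension being overlaid onto /usr. Assuming the extension image sits in one of the standard search paths (/etc/extensions, /run/extensions or /var/lib/extensions, the systemd defaults), its merge state can be listed and re-applied by hand; a brief sketch:

    systemd-sysext status     # list hierarchies and the extensions merged into them
    systemd-sysext refresh    # unmerge everything, then merge the currently staged images again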
May 17 00:31:17.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.732788 systemd[1]: Mounted usr-share-oem.mount. May 17 00:31:17.736680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:31:17.736891 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:31:17.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.738586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:31:17.738766 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:31:17.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.742922 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:31:17.743081 systemd[1]: Finished modprobe@loop.service. May 17 00:31:17.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.746723 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:31:17.746844 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:31:17.747868 systemd[1]: Finished systemd-sysext.service. May 17 00:31:17.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:17.750473 systemd[1]: Starting ensure-sysext.service... May 17 00:31:17.754711 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:31:17.769583 systemd[1]: Reloading. May 17 00:31:17.772232 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:31:17.773274 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:31:17.777884 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
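The systemd-tmpfiles notices above are harmless: two tmpfiles.d fragments declare the same path and the later duplicate is ignored. Entries use the single-line format sketched below, and a file of the same name dropped into /etc/tmpfiles.d overrides the copy shipped under /usr/lib/tmpfiles.d (the entry shown is a hypothetical override, not the actual contents of legacy.conf):

    # /etc/tmpfiles.d/legacy.conf  (hypothetical override)
    d /run/lock 0755 root root -

Running systemd-tmpfiles --create applies the merged configuration immediately.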
May 17 00:31:17.856451 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2025-05-17T00:31:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:31:17.856482 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2025-05-17T00:31:17Z" level=info msg="torcx already run" May 17 00:31:17.921352 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:31:18.018580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:31:18.018604 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:31:18.047458 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:31:18.138000 audit: BPF prog-id=27 op=LOAD May 17 00:31:18.138000 audit: BPF prog-id=24 op=UNLOAD May 17 00:31:18.139000 audit: BPF prog-id=28 op=LOAD May 17 00:31:18.139000 audit: BPF prog-id=29 op=LOAD May 17 00:31:18.139000 audit: BPF prog-id=25 op=UNLOAD May 17 00:31:18.139000 audit: BPF prog-id=26 op=UNLOAD May 17 00:31:18.141000 audit: BPF prog-id=30 op=LOAD May 17 00:31:18.142000 audit: BPF prog-id=31 op=LOAD May 17 00:31:18.142000 audit: BPF prog-id=21 op=UNLOAD May 17 00:31:18.142000 audit: BPF prog-id=22 op=UNLOAD May 17 00:31:18.143000 audit: BPF prog-id=32 op=LOAD May 17 00:31:18.143000 audit: BPF prog-id=18 op=UNLOAD May 17 00:31:18.143000 audit: BPF prog-id=33 op=LOAD May 17 00:31:18.143000 audit: BPF prog-id=34 op=LOAD May 17 00:31:18.144000 audit: BPF prog-id=19 op=UNLOAD May 17 00:31:18.144000 audit: BPF prog-id=20 op=UNLOAD May 17 00:31:18.146000 audit: BPF prog-id=35 op=LOAD May 17 00:31:18.146000 audit: BPF prog-id=23 op=UNLOAD May 17 00:31:18.148742 systemd[1]: Finished ldconfig.service. May 17 00:31:18.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.151215 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:31:18.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.159976 systemd[1]: Starting audit-rules.service... May 17 00:31:18.162452 systemd[1]: Starting clean-ca-certificates.service... May 17 00:31:18.165269 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:31:18.177000 audit: BPF prog-id=36 op=LOAD May 17 00:31:18.178548 systemd[1]: Starting systemd-resolved.service... May 17 00:31:18.188000 audit: BPF prog-id=37 op=LOAD May 17 00:31:18.190096 systemd[1]: Starting systemd-timesyncd.service... May 17 00:31:18.192591 systemd[1]: Starting systemd-update-utmp.service... May 17 00:31:18.194329 systemd[1]: Finished clean-ca-certificates.service. 
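The locksmithd.service warnings above point at the legacy cgroup-v1 directives CPUShares= and MemoryLimit=; their current equivalents are CPUWeight= and MemoryMax=. A drop-in created with systemctl edit locksmithd.service would carry the replacements roughly as follows (the values are placeholders, not taken from the shipped unit):

    # /etc/systemd/system/locksmithd.service.d/override.conf (hypothetical)
    [Service]
    CPUWeight=100
    MemoryMax=128M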
May 17 00:31:18.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.198165 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:31:18.215321 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:31:18.218000 audit[1220]: SYSTEM_BOOT pid=1220 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:31:18.221701 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:31:18.231748 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:31:18.236479 systemd[1]: Starting modprobe@loop.service... May 17 00:31:18.237537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:31:18.237738 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:31:18.237937 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:31:18.239686 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:31:18.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.241672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:31:18.241805 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:31:18.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.248145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:31:18.248284 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:31:18.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.256018 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:31:18.256233 systemd[1]: Finished modprobe@loop.service. May 17 00:31:18.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:31:18.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.265137 systemd[1]: Finished systemd-update-utmp.service. May 17 00:31:18.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.272791 systemd[1]: Finished ensure-sysext.service. May 17 00:31:18.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.275069 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:31:18.280086 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:31:18.285083 systemd[1]: Starting modprobe@drm.service... May 17 00:31:18.290976 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:31:18.293328 systemd[1]: Starting modprobe@loop.service... May 17 00:31:18.294411 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:31:18.294475 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:31:18.303159 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:31:18.321180 systemd[1]: Starting systemd-update-done.service... May 17 00:31:18.327277 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:31:18.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:31:18.331083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:31:18.331267 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:31:18.332794 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:31:18.332928 systemd[1]: Finished modprobe@drm.service. May 17 00:31:18.334255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:31:18.334374 systemd[1]: Finished modprobe@efi_pstore.service. 
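The modprobe@<module>.service entries here are instances of a oneshot template unit that loads the named kernel module and exits, which is why each instance logs 'Deactivated successfully' alongside its 'Finished' line. A quick way to inspect the template and confirm the modules are present (the exact ExecStart line differs between systemd versions, so treat this as a sketch):

    systemctl cat modprobe@.service    # show the template unit definition
    lsmod | grep -E 'dm_mod|loop'      # confirm the requested modules ended up loaded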
May 17 00:31:18.337300 augenrules[1242]: No rules May 17 00:31:18.334000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:31:18.334000 audit[1242]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd79573130 a2=420 a3=0 items=0 ppid=1214 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:31:18.334000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:31:18.339923 systemd[1]: Finished audit-rules.service. May 17 00:31:18.344495 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:31:18.344660 systemd[1]: Finished modprobe@loop.service. May 17 00:31:18.345899 systemd[1]: Finished systemd-update-done.service. May 17 00:31:18.349705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:31:18.349779 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:31:18.378109 systemd-resolved[1217]: Positive Trust Anchors: May 17 00:31:18.378129 systemd-resolved[1217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:31:18.378180 systemd-resolved[1217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:31:18.392373 systemd[1]: Started systemd-timesyncd.service. May 17 00:31:19.581999 systemd-timesyncd[1219]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 00:31:19.582073 systemd-timesyncd[1219]: Initial clock synchronization to Sat 2025-05-17 00:31:19.581830 UTC. May 17 00:31:19.583089 systemd[1]: Reached target time-set.target. May 17 00:31:19.601906 systemd-resolved[1217]: Defaulting to hostname 'linux'. May 17 00:31:19.603996 systemd[1]: Started systemd-resolved.service. May 17 00:31:19.611874 systemd[1]: Reached target network.target. May 17 00:31:19.621394 systemd[1]: Reached target nss-lookup.target. May 17 00:31:19.625832 systemd[1]: Reached target sysinit.target. May 17 00:31:19.633392 systemd[1]: Started motdgen.path. May 17 00:31:19.639766 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:31:19.647558 systemd[1]: Started logrotate.timer. May 17 00:31:19.652654 systemd[1]: Started mdadm.timer. May 17 00:31:19.658833 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:31:19.666009 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:31:19.667370 systemd[1]: Reached target paths.target. May 17 00:31:19.674849 systemd[1]: Reached target timers.target. May 17 00:31:19.678359 systemd[1]: Listening on dbus.socket. May 17 00:31:19.681552 systemd[1]: Starting docker.socket... May 17 00:31:19.687364 systemd[1]: Listening on sshd.socket. 
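augenrules reported 'No rules', and the audit records above show it loading an essentially empty /etc/audit/audit.rules through auditctl -R. If rules were wanted, fragments placed under /etc/audit/rules.d/ get compiled into that file; a minimal illustrative rule set, not present on this system:

    # /etc/audit/rules.d/10-identity.rules (hypothetical)
    -b 8192
    -w /etc/passwd -p wa -k identity

    augenrules --load    # rebuild /etc/audit/audit.rules and load it into the kernel
    auditctl -l          # list the rules currently active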
May 17 00:31:19.692459 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:31:19.695247 systemd[1]: Listening on docker.socket. May 17 00:31:19.700531 systemd[1]: Reached target sockets.target. May 17 00:31:19.701867 systemd[1]: Reached target basic.target. May 17 00:31:19.705881 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:31:19.706787 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:31:19.707300 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:31:19.708564 systemd[1]: Starting containerd.service... May 17 00:31:19.721621 systemd[1]: Starting dbus.service... May 17 00:31:19.729579 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:31:19.731839 systemd[1]: Starting extend-filesystems.service... May 17 00:31:19.732827 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:31:19.737928 jq[1253]: false May 17 00:31:19.741095 systemd[1]: Starting motdgen.service... May 17 00:31:19.748583 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:31:19.757620 systemd[1]: Starting sshd-keygen.service... May 17 00:31:19.779859 extend-filesystems[1254]: Found loop1 May 17 00:31:19.779859 extend-filesystems[1254]: Found sr0 May 17 00:31:19.779859 extend-filesystems[1254]: Found vda May 17 00:31:19.779859 extend-filesystems[1254]: Found vda1 May 17 00:31:19.779859 extend-filesystems[1254]: Found vda2 May 17 00:31:19.779859 extend-filesystems[1254]: Found vda3 May 17 00:31:19.779859 extend-filesystems[1254]: Found usr May 17 00:31:19.779859 extend-filesystems[1254]: Found vda4 May 17 00:31:19.779859 extend-filesystems[1254]: Found vda6 May 17 00:31:19.779859 extend-filesystems[1254]: Found vda7 May 17 00:31:19.779859 extend-filesystems[1254]: Found vda9 May 17 00:31:19.779859 extend-filesystems[1254]: Checking size of /dev/vda9 May 17 00:31:19.810566 systemd-networkd[1105]: eth0: Gained IPv6LL May 17 00:31:19.895805 extend-filesystems[1254]: Resized partition /dev/vda9 May 17 00:31:19.861319 dbus-daemon[1252]: [system] SELinux support is enabled May 17 00:31:19.857743 systemd[1]: Starting systemd-logind.service... May 17 00:31:19.947861 extend-filesystems[1276]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:31:19.869827 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:31:19.869910 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:31:19.950932 jq[1275]: true May 17 00:31:19.870601 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:31:19.872224 systemd[1]: Starting update-engine.service... May 17 00:31:19.879664 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:31:19.903420 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:31:19.904160 systemd[1]: Started dbus.service. 
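extend-filesystems.service enumerated the vda partitions above and is what drives the online growth of /dev/vda9 reported a little later in the log (ext4 resized from 553472 to 1864699 blocks). The manual equivalent, using the device name taken from this log, is roughly:

    df -h /               # size of the root filesystem before the resize
    resize2fs /dev/vda9   # grow the mounted ext4 filesystem to fill its partition
    df -h /               # confirm the new size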
May 17 00:31:19.912197 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:31:19.926861 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:31:19.927078 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:31:19.927411 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:31:19.927590 systemd[1]: Finished motdgen.service. May 17 00:31:19.935076 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:31:19.935287 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:31:19.984575 jq[1279]: true May 17 00:31:19.986404 systemd[1]: Reached target network-online.target. May 17 00:31:19.988023 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:31:20.003482 systemd[1]: Starting kubelet.service... May 17 00:31:20.004482 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:31:20.004532 systemd[1]: Reached target system-config.target. May 17 00:31:20.006205 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:31:20.006238 systemd[1]: Reached target user-config.target. May 17 00:31:20.014291 update_engine[1272]: I0517 00:31:20.011811 1272 main.cc:92] Flatcar Update Engine starting May 17 00:31:20.022223 update_engine[1272]: I0517 00:31:20.017937 1272 update_check_scheduler.cc:74] Next update check in 10m49s May 17 00:31:20.019557 systemd[1]: Started update-engine.service. May 17 00:31:20.032131 systemd[1]: Started locksmithd.service. May 17 00:31:20.090417 systemd-logind[1271]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:31:20.090451 systemd-logind[1271]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:31:20.093836 systemd-logind[1271]: New seat seat0. May 17 00:31:20.096366 systemd[1]: Started systemd-logind.service. May 17 00:31:20.106620 env[1280]: time="2025-05-17T00:31:20.106533793Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:31:20.129813 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:31:20.189048 env[1280]: time="2025-05-17T00:31:20.147778346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:31:20.189575 env[1280]: time="2025-05-17T00:31:20.189316570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191357919Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191399587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191702706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191729035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191744955Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191759362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.191837268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.192063212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.192199267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:31:20.192480 env[1280]: time="2025-05-17T00:31:20.192233561Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:31:20.192843 env[1280]: time="2025-05-17T00:31:20.192289296Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:31:20.192843 env[1280]: time="2025-05-17T00:31:20.192304274Z" level=info msg="metadata content store policy set" policy=shared May 17 00:31:20.200250 extend-filesystems[1276]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:31:20.200250 extend-filesystems[1276]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:31:20.200250 extend-filesystems[1276]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:31:20.212469 extend-filesystems[1254]: Resized filesystem in /dev/vda9 May 17 00:31:20.208476 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:31:20.208706 systemd[1]: Finished extend-filesystems.service. May 17 00:31:20.237886 bash[1303]: Updated "/home/core/.ssh/authorized_keys" May 17 00:31:20.236098 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240095547Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240154207Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240176448Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240221062Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240243244Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240261809Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240279692Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240297686Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240316231Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240337651Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240355254Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240372286Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240562112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:31:20.241115 env[1280]: time="2025-05-17T00:31:20.240682949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241109388Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241145717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241164953Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241217702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241237048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241254611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241271623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241288023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241304444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241319132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241336224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241354699Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241511272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241535397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242108 env[1280]: time="2025-05-17T00:31:20.241553461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242629 env[1280]: time="2025-05-17T00:31:20.241569101Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:31:20.242629 env[1280]: time="2025-05-17T00:31:20.241590601Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:31:20.242629 env[1280]: time="2025-05-17T00:31:20.241607773Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:31:20.242629 env[1280]: time="2025-05-17T00:31:20.241642128Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:31:20.242629 env[1280]: time="2025-05-17T00:31:20.241723440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:31:20.242859 env[1280]: time="2025-05-17T00:31:20.241965755Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:31:20.242859 env[1280]: time="2025-05-17T00:31:20.242038671Z" level=info msg="Connect containerd service" May 17 00:31:20.242859 env[1280]: time="2025-05-17T00:31:20.242072134Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:31:20.242859 env[1280]: time="2025-05-17T00:31:20.242761337Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.243130389Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.243180984Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.243168471Z" level=info msg="Start subscribing containerd event" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.253694735Z" level=info msg="Start recovering state" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.253830449Z" level=info msg="Start event monitor" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.253861728Z" level=info msg="Start snapshots syncer" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.253880493Z" level=info msg="Start cni network conf syncer for default" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.253891103Z" level=info msg="Start streaming server" May 17 00:31:20.259748 env[1280]: time="2025-05-17T00:31:20.254073616Z" level=info msg="containerd successfully booted in 0.181750s" May 17 00:31:20.243305 systemd[1]: Started containerd.service. May 17 00:31:20.323901 locksmithd[1299]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:31:21.538559 sshd_keygen[1269]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:31:21.572080 systemd[1]: Finished sshd-keygen.service. May 17 00:31:21.578563 systemd[1]: Starting issuegen.service... May 17 00:31:21.592526 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:31:21.592726 systemd[1]: Finished issuegen.service. May 17 00:31:21.601125 systemd[1]: Starting systemd-user-sessions.service... May 17 00:31:21.612778 systemd[1]: Finished systemd-user-sessions.service. May 17 00:31:21.618905 systemd[1]: Started getty@tty1.service. May 17 00:31:21.621552 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:31:21.623332 systemd[1]: Reached target getty.target. May 17 00:31:21.735686 systemd[1]: Started kubelet.service. May 17 00:31:21.748735 systemd[1]: Reached target multi-user.target. May 17 00:31:21.754087 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:31:21.765996 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:31:21.766211 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:31:21.771191 systemd[1]: Startup finished in 1.103s (kernel) + 7.811s (initrd) + 10.984s (userspace) = 19.899s. 
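The long `Start cri plugin with config {...}` record earlier in this stretch shows the runc runtime registered as `io.containerd.runc.v2` with `Options:map[SystemdCgroup:true]`. Purely as a sketch, assuming a stock containerd 1.6 layout (the file name and output path below are conventions, not read from this host), this is the `config.toml` stanza that produces that per-runtime option:

```python
# Illustrative only: the conventional containerd 1.6 config.toml stanza that
# yields Runtimes[runc] = {Type: io.containerd.runc.v2, Options: {SystemdCgroup: true}}
# as seen in the CRI config dump above. Written to a local example file, not to
# /etc/containerd/config.toml on this host.
from pathlib import Path

RUNC_SYSTEMD_CGROUP_STANZA = """\
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
"""

if __name__ == "__main__":
    Path("containerd-config.toml.example").write_text(RUNC_SYSTEMD_CGROUP_STANZA)
    print(RUNC_SYSTEMD_CGROUP_STANZA, end="")
```

With cgroup v2 on this node (`CgroupVersion:2` in the kubelet's container-manager dump further down) the usual expectation is that this runc option and the kubelet's `cgroupDriver: systemd` stay in agreement.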
May 17 00:31:22.760650 kubelet[1330]: E0517 00:31:22.760549 1330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:31:22.763443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:31:22.763627 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:31:22.764162 systemd[1]: kubelet.service: Consumed 1.429s CPU time. May 17 00:31:29.054124 systemd[1]: Created slice system-sshd.slice. May 17 00:31:29.055601 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:35458.service. May 17 00:31:29.151080 sshd[1340]: Accepted publickey for core from 10.0.0.1 port 35458 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:31:29.153901 sshd[1340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:31:29.166481 systemd[1]: Created slice user-500.slice. May 17 00:31:29.167831 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:31:29.180489 systemd-logind[1271]: New session 1 of user core. May 17 00:31:29.195527 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:31:29.197413 systemd[1]: Starting user@500.service... May 17 00:31:29.209478 (systemd)[1343]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:31:29.360212 systemd[1343]: Queued start job for default target default.target. May 17 00:31:29.360975 systemd[1343]: Reached target paths.target. May 17 00:31:29.361001 systemd[1343]: Reached target sockets.target. May 17 00:31:29.361020 systemd[1343]: Reached target timers.target. May 17 00:31:29.361038 systemd[1343]: Reached target basic.target. May 17 00:31:29.361100 systemd[1343]: Reached target default.target. May 17 00:31:29.361134 systemd[1343]: Startup finished in 139ms. May 17 00:31:29.361662 systemd[1]: Started user@500.service. May 17 00:31:29.373265 systemd[1]: Started session-1.scope. May 17 00:31:29.495246 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:35466.service. May 17 00:31:29.540608 sshd[1352]: Accepted publickey for core from 10.0.0.1 port 35466 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:31:29.544855 sshd[1352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:31:29.575418 systemd-logind[1271]: New session 2 of user core. May 17 00:31:29.577360 systemd[1]: Started session-2.scope. May 17 00:31:29.653720 sshd[1352]: pam_unix(sshd:session): session closed for user core May 17 00:31:29.660351 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:35466.service: Deactivated successfully. May 17 00:31:29.661209 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:31:29.664836 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:35472.service. May 17 00:31:29.665290 systemd-logind[1271]: Session 2 logged out. Waiting for processes to exit. May 17 00:31:29.669336 systemd-logind[1271]: Removed session 2. May 17 00:31:29.713382 sshd[1358]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:31:29.715309 sshd[1358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:31:29.722608 systemd-logind[1271]: New session 3 of user core. May 17 00:31:29.723876 systemd[1]: Started session-3.scope. 
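The kubelet that exits at the top of this stretch (pid 1330) fails only because `/var/lib/kubelet/config.yaml` has not been provisioned yet; the unit comes back up successfully further down once configuration is in place. As a minimal, assumption-laden sketch of what that path normally holds (the `kubelet.config.k8s.io/v1beta1` schema is standard; every value below is illustrative, not recovered from this node):

```python
# Minimal sketch of the KubeletConfiguration file the failed unit expected at
# /var/lib/kubelet/config.yaml. apiVersion/kind are the standard kubelet config
# schema; every value is an illustrative assumption, not data from this node.
from pathlib import Path

KUBELET_CONFIG_YAML = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches CgroupDriver:systemd in the node config dump below
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.3.0.10                    # assumed cluster DNS service IP
"""

if __name__ == "__main__":
    Path("kubelet-config.example.yaml").write_text(KUBELET_CONFIG_YAML)
    print(KUBELET_CONFIG_YAML, end="")
```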
May 17 00:31:29.791244 sshd[1358]: pam_unix(sshd:session): session closed for user core May 17 00:31:29.797218 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:35472.service: Deactivated successfully. May 17 00:31:29.797969 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:31:29.804221 systemd-logind[1271]: Session 3 logged out. Waiting for processes to exit. May 17 00:31:29.804353 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:35480.service. May 17 00:31:29.811622 systemd-logind[1271]: Removed session 3. May 17 00:31:29.853492 sshd[1365]: Accepted publickey for core from 10.0.0.1 port 35480 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:31:29.855036 sshd[1365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:31:29.860072 systemd-logind[1271]: New session 4 of user core. May 17 00:31:29.861581 systemd[1]: Started session-4.scope. May 17 00:31:30.032075 sshd[1365]: pam_unix(sshd:session): session closed for user core May 17 00:31:30.036751 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:35480.service: Deactivated successfully. May 17 00:31:30.037432 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:31:30.043967 systemd-logind[1271]: Session 4 logged out. Waiting for processes to exit. May 17 00:31:30.046019 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:35494.service. May 17 00:31:30.047226 systemd-logind[1271]: Removed session 4. May 17 00:31:30.094285 sshd[1371]: Accepted publickey for core from 10.0.0.1 port 35494 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:31:30.095792 sshd[1371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:31:30.100171 systemd-logind[1271]: New session 5 of user core. May 17 00:31:30.101200 systemd[1]: Started session-5.scope. May 17 00:31:30.197399 sudo[1374]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:31:30.197740 sudo[1374]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:31:30.217872 systemd[1]: Starting coreos-metadata.service... May 17 00:31:30.235763 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 00:31:30.239094 systemd[1]: Finished coreos-metadata.service. May 17 00:31:31.530099 systemd[1]: Stopped kubelet.service. May 17 00:31:31.530310 systemd[1]: kubelet.service: Consumed 1.429s CPU time. May 17 00:31:31.532801 systemd[1]: Starting kubelet.service... May 17 00:31:31.587034 systemd[1]: Reloading. May 17 00:31:31.697208 /usr/lib/systemd/system-generators/torcx-generator[1434]: time="2025-05-17T00:31:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:31:31.697257 /usr/lib/systemd/system-generators/torcx-generator[1434]: time="2025-05-17T00:31:31Z" level=info msg="torcx already run" May 17 00:31:31.933205 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:31:31.933234 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 17 00:31:31.970543 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:31:32.189227 systemd[1]: Started kubelet.service. May 17 00:31:32.196098 systemd[1]: Stopping kubelet.service... May 17 00:31:32.200396 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:31:32.200625 systemd[1]: Stopped kubelet.service. May 17 00:31:32.202734 systemd[1]: Starting kubelet.service... May 17 00:31:32.420932 systemd[1]: Started kubelet.service. May 17 00:31:32.541707 kubelet[1485]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:31:32.541707 kubelet[1485]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:31:32.541707 kubelet[1485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:31:32.542177 kubelet[1485]: I0517 00:31:32.541716 1485 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:31:33.112863 kubelet[1485]: I0517 00:31:33.112818 1485 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:31:33.116087 kubelet[1485]: I0517 00:31:33.113146 1485 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:31:33.116087 kubelet[1485]: I0517 00:31:33.113790 1485 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:31:33.157286 kubelet[1485]: I0517 00:31:33.157239 1485 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:31:33.164806 kubelet[1485]: E0517 00:31:33.164753 1485 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:31:33.164806 kubelet[1485]: I0517 00:31:33.164795 1485 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:31:33.178810 kubelet[1485]: I0517 00:31:33.177945 1485 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:31:33.178810 kubelet[1485]: I0517 00:31:33.178292 1485 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:31:33.178810 kubelet[1485]: I0517 00:31:33.178320 1485 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:31:33.178810 kubelet[1485]: I0517 00:31:33.178518 1485 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:31:33.179266 kubelet[1485]: I0517 00:31:33.178529 1485 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:31:33.180189 kubelet[1485]: I0517 00:31:33.179457 1485 state_mem.go:36] "Initialized new in-memory state store" May 17 00:31:33.196231 kubelet[1485]: I0517 00:31:33.196156 1485 kubelet.go:480] "Attempting to sync node with API server" May 17 00:31:33.196231 kubelet[1485]: I0517 00:31:33.196221 1485 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:31:33.196415 kubelet[1485]: I0517 00:31:33.196257 1485 kubelet.go:386] "Adding apiserver pod source" May 17 00:31:33.196415 kubelet[1485]: I0517 00:31:33.196274 1485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:31:33.199017 kubelet[1485]: E0517 00:31:33.198965 1485 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:33.199151 kubelet[1485]: E0517 00:31:33.199061 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:33.221567 kubelet[1485]: I0517 00:31:33.221538 1485 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:31:33.222338 kubelet[1485]: I0517 00:31:33.222303 1485 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:31:33.223048 kubelet[1485]: E0517 
00:31:33.222957 1485 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:31:33.223229 kubelet[1485]: E0517 00:31:33.222772 1485 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.61\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:31:33.224946 kubelet[1485]: W0517 00:31:33.223786 1485 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:31:33.241596 kubelet[1485]: I0517 00:31:33.241552 1485 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:31:33.241756 kubelet[1485]: I0517 00:31:33.241635 1485 server.go:1289] "Started kubelet" May 17 00:31:33.242706 kubelet[1485]: I0517 00:31:33.241923 1485 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:31:33.245690 kubelet[1485]: I0517 00:31:33.245620 1485 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:31:33.249094 kubelet[1485]: I0517 00:31:33.246492 1485 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:31:33.249094 kubelet[1485]: I0517 00:31:33.246694 1485 server.go:317] "Adding debug handlers to kubelet server" May 17 00:31:33.253915 kubelet[1485]: E0517 00:31:33.253344 1485 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:31:33.255333 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:31:33.257993 kubelet[1485]: I0517 00:31:33.255517 1485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:31:33.257993 kubelet[1485]: I0517 00:31:33.256295 1485 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:31:33.257993 kubelet[1485]: I0517 00:31:33.256391 1485 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:31:33.257993 kubelet[1485]: E0517 00:31:33.257307 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:33.257993 kubelet[1485]: I0517 00:31:33.257484 1485 reconciler.go:26] "Reconciler: start to sync state" May 17 00:31:33.257993 kubelet[1485]: I0517 00:31:33.257504 1485 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:31:33.258563 kubelet[1485]: I0517 00:31:33.258539 1485 factory.go:223] Registration of the systemd container factory successfully May 17 00:31:33.258721 kubelet[1485]: I0517 00:31:33.258684 1485 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:31:33.260384 kubelet[1485]: I0517 00:31:33.260357 1485 factory.go:223] Registration of the containerd container factory successfully May 17 00:31:33.284685 kubelet[1485]: I0517 00:31:33.284397 1485 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:31:33.284685 kubelet[1485]: I0517 00:31:33.284415 1485 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:31:33.284685 kubelet[1485]: I0517 00:31:33.284431 1485 state_mem.go:36] "Initialized new in-memory state store" May 17 00:31:33.291868 kubelet[1485]: E0517 00:31:33.291616 1485 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.61\" not found" node="10.0.0.61" May 17 00:31:33.299706 kubelet[1485]: I0517 00:31:33.299373 1485 policy_none.go:49] "None policy: Start" May 17 00:31:33.299706 kubelet[1485]: I0517 00:31:33.299402 1485 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:31:33.299706 kubelet[1485]: I0517 00:31:33.299415 1485 state_mem.go:35] "Initializing new in-memory state store" May 17 00:31:33.311937 systemd[1]: Created slice kubepods.slice. May 17 00:31:33.322080 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:31:33.329920 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:31:33.340157 kubelet[1485]: E0517 00:31:33.337866 1485 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:31:33.340157 kubelet[1485]: I0517 00:31:33.338044 1485 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:31:33.340157 kubelet[1485]: I0517 00:31:33.338055 1485 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:31:33.340483 kubelet[1485]: I0517 00:31:33.340441 1485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:31:33.343133 kubelet[1485]: E0517 00:31:33.343114 1485 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:31:33.343273 kubelet[1485]: E0517 00:31:33.343256 1485 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.61\" not found" May 17 00:31:33.451985 kubelet[1485]: I0517 00:31:33.450211 1485 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.61" May 17 00:31:33.471003 kubelet[1485]: I0517 00:31:33.470966 1485 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.61" May 17 00:31:33.471268 kubelet[1485]: E0517 00:31:33.471250 1485 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.61\": node \"10.0.0.61\" not found" May 17 00:31:33.499430 kubelet[1485]: E0517 00:31:33.499368 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:33.557455 sudo[1374]: pam_unix(sudo:session): session closed for user root May 17 00:31:33.566943 sshd[1371]: pam_unix(sshd:session): session closed for user core May 17 00:31:33.569864 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:35494.service: Deactivated successfully. May 17 00:31:33.573960 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:31:33.574453 systemd-logind[1271]: Session 5 logged out. Waiting for processes to exit. May 17 00:31:33.578871 systemd-logind[1271]: Removed session 5. May 17 00:31:33.604258 kubelet[1485]: E0517 00:31:33.604159 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:33.635146 kubelet[1485]: I0517 00:31:33.635076 1485 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:31:33.636777 kubelet[1485]: I0517 00:31:33.636734 1485 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:31:33.636777 kubelet[1485]: I0517 00:31:33.636768 1485 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:31:33.636867 kubelet[1485]: I0517 00:31:33.636801 1485 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:31:33.636867 kubelet[1485]: I0517 00:31:33.636812 1485 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:31:33.636867 kubelet[1485]: E0517 00:31:33.636863 1485 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 17 00:31:33.705535 kubelet[1485]: E0517 00:31:33.705354 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:33.806560 kubelet[1485]: E0517 00:31:33.806450 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:33.907502 kubelet[1485]: E0517 00:31:33.907394 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.008438 kubelet[1485]: E0517 00:31:34.008373 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.109383 kubelet[1485]: E0517 00:31:34.109308 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.117692 kubelet[1485]: I0517 00:31:34.117614 1485 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 17 00:31:34.117902 kubelet[1485]: I0517 00:31:34.117862 1485 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 17 00:31:34.117993 kubelet[1485]: I0517 00:31:34.117956 1485 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 17 00:31:34.202098 kubelet[1485]: E0517 00:31:34.199828 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:34.209854 kubelet[1485]: E0517 00:31:34.209767 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.310929 kubelet[1485]: E0517 00:31:34.310762 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.411847 kubelet[1485]: E0517 00:31:34.411754 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.512726 kubelet[1485]: E0517 00:31:34.512606 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.613951 kubelet[1485]: E0517 00:31:34.613430 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.714026 kubelet[1485]: E0517 00:31:34.713882 1485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.61\" not found" May 17 00:31:34.816477 kubelet[1485]: I0517 00:31:34.816448 1485 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 17 00:31:34.817275 env[1280]: time="2025-05-17T00:31:34.817147580Z" level=info msg="No cni config template is specified, wait for other system components 
to drop the config." May 17 00:31:34.819131 kubelet[1485]: I0517 00:31:34.819109 1485 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 17 00:31:35.200884 kubelet[1485]: I0517 00:31:35.200471 1485 apiserver.go:52] "Watching apiserver" May 17 00:31:35.200884 kubelet[1485]: E0517 00:31:35.200776 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:35.242255 systemd[1]: Created slice kubepods-besteffort-pod3c974fcd_eff4_42b7_bae9_9fe8a45e6766.slice. May 17 00:31:35.259311 kubelet[1485]: I0517 00:31:35.259246 1485 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:31:35.275654 kubelet[1485]: I0517 00:31:35.275204 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-run\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.275654 kubelet[1485]: I0517 00:31:35.275292 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-etc-cni-netd\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.275654 kubelet[1485]: I0517 00:31:35.275317 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-net\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.275360 systemd[1]: Created slice kubepods-burstable-poddab1a348_85d4_4040_934d_4c0658d05311.slice. 
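By this point the kubelet has pushed PodCIDR `192.168.1.0/24` to the runtime while containerd still reports no network config in `/etc/cni/net.d`; the "wait for other system components to drop the config" message means the Cilium pod being set up in the surrounding records is expected to write that file itself. Only to illustrate the file format containerd polls for, here is a sketch of a conflist built from the reference `bridge` and `host-local` plugins; none of the names or values come from this host.

```python
# Shape of a CNI .conflist of the kind containerd polls /etc/cni/net.d for.
# Uses the reference bridge + host-local plugins purely as an illustration;
# on this node Cilium is expected to install its own config instead.
import json

CONFLIST = {
    "cniVersion": "0.3.1",
    "name": "example-pod-network",          # assumed name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24",  # the PodCIDR pushed in the record above
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    # Written locally; the real location would be something like
    # /etc/cni/net.d/10-example.conflist.
    with open("10-example.conflist", "w") as f:
        json.dump(CONFLIST, f, indent=2)
    print(json.dumps(CONFLIST, indent=2))
```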
May 17 00:31:35.276055 kubelet[1485]: I0517 00:31:35.275820 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-kernel\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276055 kubelet[1485]: I0517 00:31:35.275855 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfjn\" (UniqueName: \"kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-kube-api-access-xlfjn\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276055 kubelet[1485]: I0517 00:31:35.275879 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c974fcd-eff4-42b7-bae9-9fe8a45e6766-kube-proxy\") pod \"kube-proxy-56b4m\" (UID: \"3c974fcd-eff4-42b7-bae9-9fe8a45e6766\") " pod="kube-system/kube-proxy-56b4m" May 17 00:31:35.276055 kubelet[1485]: I0517 00:31:35.275898 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-bpf-maps\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276055 kubelet[1485]: I0517 00:31:35.275917 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-hostproc\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276055 kubelet[1485]: I0517 00:31:35.275937 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cni-path\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276336 kubelet[1485]: I0517 00:31:35.275957 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-hubble-tls\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276336 kubelet[1485]: I0517 00:31:35.275992 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-cgroup\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276336 kubelet[1485]: I0517 00:31:35.276013 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-lib-modules\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276336 kubelet[1485]: I0517 00:31:35.276035 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-xtables-lock\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276336 kubelet[1485]: I0517 00:31:35.276057 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dab1a348-85d4-4040-934d-4c0658d05311-clustermesh-secrets\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276336 kubelet[1485]: I0517 00:31:35.276078 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dab1a348-85d4-4040-934d-4c0658d05311-cilium-config-path\") pod \"cilium-z6pt7\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " pod="kube-system/cilium-z6pt7" May 17 00:31:35.276606 kubelet[1485]: I0517 00:31:35.276098 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c974fcd-eff4-42b7-bae9-9fe8a45e6766-xtables-lock\") pod \"kube-proxy-56b4m\" (UID: \"3c974fcd-eff4-42b7-bae9-9fe8a45e6766\") " pod="kube-system/kube-proxy-56b4m" May 17 00:31:35.276606 kubelet[1485]: I0517 00:31:35.276117 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c974fcd-eff4-42b7-bae9-9fe8a45e6766-lib-modules\") pod \"kube-proxy-56b4m\" (UID: \"3c974fcd-eff4-42b7-bae9-9fe8a45e6766\") " pod="kube-system/kube-proxy-56b4m" May 17 00:31:35.276606 kubelet[1485]: I0517 00:31:35.276144 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpc7\" (UniqueName: \"kubernetes.io/projected/3c974fcd-eff4-42b7-bae9-9fe8a45e6766-kube-api-access-jfpc7\") pod \"kube-proxy-56b4m\" (UID: \"3c974fcd-eff4-42b7-bae9-9fe8a45e6766\") " pod="kube-system/kube-proxy-56b4m" May 17 00:31:35.393578 kubelet[1485]: I0517 00:31:35.384133 1485 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:31:35.572568 kubelet[1485]: E0517 00:31:35.571798 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:35.574032 env[1280]: time="2025-05-17T00:31:35.573423086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-56b4m,Uid:3c974fcd-eff4-42b7-bae9-9fe8a45e6766,Namespace:kube-system,Attempt:0,}" May 17 00:31:35.599012 kubelet[1485]: E0517 00:31:35.597537 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:35.599192 env[1280]: time="2025-05-17T00:31:35.598172910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6pt7,Uid:dab1a348-85d4-4040-934d-4c0658d05311,Namespace:kube-system,Attempt:0,}" May 17 00:31:36.201761 kubelet[1485]: E0517 00:31:36.201701 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:36.428197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843199.mount: Deactivated successfully. May 17 00:31:36.448415 env[1280]: time="2025-05-17T00:31:36.448259329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.452833 env[1280]: time="2025-05-17T00:31:36.452480357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.463582 env[1280]: time="2025-05-17T00:31:36.463454230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.470076 env[1280]: time="2025-05-17T00:31:36.467420680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.473475 env[1280]: time="2025-05-17T00:31:36.473062202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.476197 env[1280]: time="2025-05-17T00:31:36.476062169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.481064 env[1280]: time="2025-05-17T00:31:36.480930060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.485343 env[1280]: time="2025-05-17T00:31:36.485274609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:36.537331 env[1280]: time="2025-05-17T00:31:36.530799902Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:31:36.537331 env[1280]: time="2025-05-17T00:31:36.531825255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:31:36.551996 env[1280]: time="2025-05-17T00:31:36.535871344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:31:36.551996 env[1280]: time="2025-05-17T00:31:36.538585946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ecf38527bcbe6af487e49dc334ad2500068cef8f30eaf38d1e692f893491160c pid=1545 runtime=io.containerd.runc.v2 May 17 00:31:36.576998 env[1280]: time="2025-05-17T00:31:36.573997077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:31:36.576998 env[1280]: time="2025-05-17T00:31:36.574083169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:31:36.576998 env[1280]: time="2025-05-17T00:31:36.574107284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:31:36.576998 env[1280]: time="2025-05-17T00:31:36.574242968Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129 pid=1569 runtime=io.containerd.runc.v2 May 17 00:31:36.593309 systemd[1]: Started cri-containerd-ecf38527bcbe6af487e49dc334ad2500068cef8f30eaf38d1e692f893491160c.scope. May 17 00:31:36.610750 systemd[1]: Started cri-containerd-59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129.scope. 
May 17 00:31:36.650309 env[1280]: time="2025-05-17T00:31:36.648543325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-56b4m,Uid:3c974fcd-eff4-42b7-bae9-9fe8a45e6766,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecf38527bcbe6af487e49dc334ad2500068cef8f30eaf38d1e692f893491160c\"" May 17 00:31:36.650489 kubelet[1485]: E0517 00:31:36.649451 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:36.653368 env[1280]: time="2025-05-17T00:31:36.653327528Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 17 00:31:36.679016 env[1280]: time="2025-05-17T00:31:36.678966851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z6pt7,Uid:dab1a348-85d4-4040-934d-4c0658d05311,Namespace:kube-system,Attempt:0,} returns sandbox id \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\"" May 17 00:31:36.683761 kubelet[1485]: E0517 00:31:36.683178 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:37.204112 kubelet[1485]: E0517 00:31:37.202475 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:38.202989 kubelet[1485]: E0517 00:31:38.202923 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:39.207755 kubelet[1485]: E0517 00:31:39.203134 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:39.268727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3036823680.mount: Deactivated successfully. 
May 17 00:31:40.207596 kubelet[1485]: E0517 00:31:40.207487 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:41.208501 kubelet[1485]: E0517 00:31:41.208433 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:41.210246 env[1280]: time="2025-05-17T00:31:41.209359362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:41.213172 env[1280]: time="2025-05-17T00:31:41.212715938Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:41.215537 env[1280]: time="2025-05-17T00:31:41.215492486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:41.222393 env[1280]: time="2025-05-17T00:31:41.222300366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:41.222893 env[1280]: time="2025-05-17T00:31:41.222831632Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 17 00:31:41.225688 env[1280]: time="2025-05-17T00:31:41.225409788Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:31:41.232314 env[1280]: time="2025-05-17T00:31:41.232278251Z" level=info msg="CreateContainer within sandbox \"ecf38527bcbe6af487e49dc334ad2500068cef8f30eaf38d1e692f893491160c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:31:41.301280 env[1280]: time="2025-05-17T00:31:41.301208635Z" level=info msg="CreateContainer within sandbox \"ecf38527bcbe6af487e49dc334ad2500068cef8f30eaf38d1e692f893491160c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b05a2ce4526cf15ddfa3e6f3d341370c28bd3133c7d204507fce930aeffb638\"" May 17 00:31:41.306556 env[1280]: time="2025-05-17T00:31:41.306511712Z" level=info msg="StartContainer for \"4b05a2ce4526cf15ddfa3e6f3d341370c28bd3133c7d204507fce930aeffb638\"" May 17 00:31:41.662394 systemd[1]: run-containerd-runc-k8s.io-4b05a2ce4526cf15ddfa3e6f3d341370c28bd3133c7d204507fce930aeffb638-runc.3QWFnU.mount: Deactivated successfully. May 17 00:31:41.735186 systemd[1]: Started cri-containerd-4b05a2ce4526cf15ddfa3e6f3d341370c28bd3133c7d204507fce930aeffb638.scope. 
May 17 00:31:41.967450 env[1280]: time="2025-05-17T00:31:41.967232435Z" level=info msg="StartContainer for \"4b05a2ce4526cf15ddfa3e6f3d341370c28bd3133c7d204507fce930aeffb638\" returns successfully" May 17 00:31:42.210059 kubelet[1485]: E0517 00:31:42.209989 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:42.984807 kubelet[1485]: E0517 00:31:42.979500 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:43.211082 kubelet[1485]: E0517 00:31:43.211027 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:43.988810 kubelet[1485]: E0517 00:31:43.988692 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:44.218719 kubelet[1485]: E0517 00:31:44.216153 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:45.219129 kubelet[1485]: E0517 00:31:45.217325 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:46.219814 kubelet[1485]: E0517 00:31:46.219703 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:47.220569 kubelet[1485]: E0517 00:31:47.220467 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:48.221374 kubelet[1485]: E0517 00:31:48.221100 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:49.222146 kubelet[1485]: E0517 00:31:49.222064 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:50.226571 kubelet[1485]: E0517 00:31:50.222813 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:51.224040 kubelet[1485]: E0517 00:31:51.223961 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:51.736759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676059808.mount: Deactivated successfully. 
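The recurring `dns.go:153 "Nameserver limits exceeded"` warnings in this stretch mean the host resolv.conf lists more nameservers than the kubelet will propagate to pods; the "applied nameserver line" in the message keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8). A rough sketch of that truncation, assuming the usual limit of three and a deliberately simplified parser (the fourth entry below is an invented placeholder for whichever server was dropped):

```python
# Rough sketch of the truncation behind the dns.go "Nameserver limits exceeded"
# warnings: only the first MAX_NAMESERVERS entries of resolv.conf are applied.
# The first three entries mirror the applied line in the log; the fourth is an
# invented placeholder standing in for whichever server was omitted.
MAX_NAMESERVERS = 3  # assumed per-pod nameserver limit

SAMPLE_RESOLV_CONF = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 192.0.2.53
"""

def applied_nameservers(resolv_conf: str, limit: int = MAX_NAMESERVERS) -> list[str]:
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:limit]

if __name__ == "__main__":
    print("applied nameserver line is:", " ".join(applied_nameservers(SAMPLE_RESOLV_CONF)))
```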
May 17 00:31:52.224978 kubelet[1485]: E0517 00:31:52.224900 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:53.196436 kubelet[1485]: E0517 00:31:53.196338 1485 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:53.225507 kubelet[1485]: E0517 00:31:53.225444 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:54.226066 kubelet[1485]: E0517 00:31:54.226006 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:55.226286 kubelet[1485]: E0517 00:31:55.226120 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:56.226923 kubelet[1485]: E0517 00:31:56.226846 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:57.227958 kubelet[1485]: E0517 00:31:57.227897 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:57.335557 env[1280]: time="2025-05-17T00:31:57.335468309Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:57.338644 env[1280]: time="2025-05-17T00:31:57.338571677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:57.342227 env[1280]: time="2025-05-17T00:31:57.342168631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:31:57.342792 env[1280]: time="2025-05-17T00:31:57.342736416Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:31:57.348393 env[1280]: time="2025-05-17T00:31:57.348168650Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:31:57.362721 env[1280]: time="2025-05-17T00:31:57.362640219Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\"" May 17 00:31:57.363232 env[1280]: time="2025-05-17T00:31:57.363205959Z" level=info msg="StartContainer for \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\"" May 17 00:31:57.381842 systemd[1]: run-containerd-runc-k8s.io-31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b-runc.NM3a4d.mount: Deactivated successfully. May 17 00:31:57.383089 systemd[1]: Started cri-containerd-31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b.scope. 
May 17 00:31:57.411659 env[1280]: time="2025-05-17T00:31:57.411598858Z" level=info msg="StartContainer for \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\" returns successfully" May 17 00:31:57.418768 systemd[1]: cri-containerd-31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b.scope: Deactivated successfully. May 17 00:31:57.971448 env[1280]: time="2025-05-17T00:31:57.971384091Z" level=info msg="shim disconnected" id=31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b May 17 00:31:57.971691 env[1280]: time="2025-05-17T00:31:57.971455945Z" level=warning msg="cleaning up after shim disconnected" id=31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b namespace=k8s.io May 17 00:31:57.971691 env[1280]: time="2025-05-17T00:31:57.971470308Z" level=info msg="cleaning up dead shim" May 17 00:31:57.978792 env[1280]: time="2025-05-17T00:31:57.978723880Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:31:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1843 runtime=io.containerd.runc.v2\n" May 17 00:31:58.031847 kubelet[1485]: E0517 00:31:58.031813 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:58.037170 env[1280]: time="2025-05-17T00:31:58.037116702Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:31:58.053215 kubelet[1485]: I0517 00:31:58.053155 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-56b4m" podStartSLOduration=20.480622169 podStartE2EDuration="25.053138443s" podCreationTimestamp="2025-05-17 00:31:33 +0000 UTC" firstStartedPulling="2025-05-17 00:31:36.652018984 +0000 UTC m=+4.220260894" lastFinishedPulling="2025-05-17 00:31:41.224535258 +0000 UTC m=+8.792777168" observedRunningTime="2025-05-17 00:31:43.014046048 +0000 UTC m=+10.582287978" watchObservedRunningTime="2025-05-17 00:31:58.053138443 +0000 UTC m=+25.621380353" May 17 00:31:58.056275 env[1280]: time="2025-05-17T00:31:58.056218510Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\"" May 17 00:31:58.056792 env[1280]: time="2025-05-17T00:31:58.056757647Z" level=info msg="StartContainer for \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\"" May 17 00:31:58.074300 systemd[1]: Started cri-containerd-6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d.scope. May 17 00:31:58.102707 env[1280]: time="2025-05-17T00:31:58.101414866Z" level=info msg="StartContainer for \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\" returns successfully" May 17 00:31:58.113054 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:31:58.113396 systemd[1]: Stopped systemd-sysctl.service. May 17 00:31:58.113840 systemd[1]: Stopping systemd-sysctl.service... May 17 00:31:58.115438 systemd[1]: Starting systemd-sysctl.service... May 17 00:31:58.115758 systemd[1]: cri-containerd-6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d.scope: Deactivated successfully. May 17 00:31:58.123997 systemd[1]: Finished systemd-sysctl.service. 
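The pod_startup_latency_tracker entry for kube-system/kube-proxy-56b4m above decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below reproduces the arithmetic from the timestamps in that log entry; it is illustrative only, the variable names are mine, and the decomposition is inferred from the figures in the entry rather than from kubelet source.

    # Sketch: reproduce the kube-proxy startup-latency figures logged above.
    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        # Entries look like "2025-05-17 00:31:36.652018984 +0000 UTC";
        # trim nanoseconds to microseconds for datetime.
        date, clock = s.split()[:2]
        whole, frac = clock.split(".")
        return datetime.fromisoformat(f"{date} {whole}.{frac[:6]}").replace(tzinfo=timezone.utc)

    created   = ts("2025-05-17 00:31:33.000000000 +0000 UTC")
    pull_from = ts("2025-05-17 00:31:36.652018984 +0000 UTC")
    pull_to   = ts("2025-05-17 00:31:41.224535258 +0000 UTC")
    observed  = ts("2025-05-17 00:31:58.053138443 +0000 UTC")

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    print(f"podStartE2EDuration ~ {e2e:.3f}s")   # ~25.053s, as logged
    print(f"podStartSLOduration ~ {slo:.3f}s")   # ~20.481s, as logged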
May 17 00:31:58.139285 env[1280]: time="2025-05-17T00:31:58.139209150Z" level=info msg="shim disconnected" id=6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d May 17 00:31:58.139285 env[1280]: time="2025-05-17T00:31:58.139273323Z" level=warning msg="cleaning up after shim disconnected" id=6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d namespace=k8s.io May 17 00:31:58.139285 env[1280]: time="2025-05-17T00:31:58.139283580Z" level=info msg="cleaning up dead shim" May 17 00:31:58.146079 env[1280]: time="2025-05-17T00:31:58.146029706Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:31:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1906 runtime=io.containerd.runc.v2\n" May 17 00:31:58.228517 kubelet[1485]: E0517 00:31:58.228421 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:58.357162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b-rootfs.mount: Deactivated successfully. May 17 00:31:59.034630 kubelet[1485]: E0517 00:31:59.034585 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:31:59.039416 env[1280]: time="2025-05-17T00:31:59.039365474Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:31:59.056174 env[1280]: time="2025-05-17T00:31:59.056104377Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\"" May 17 00:31:59.056737 env[1280]: time="2025-05-17T00:31:59.056708269Z" level=info msg="StartContainer for \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\"" May 17 00:31:59.073213 systemd[1]: Started cri-containerd-e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741.scope. May 17 00:31:59.098120 systemd[1]: cri-containerd-e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741.scope: Deactivated successfully. 
May 17 00:31:59.098866 env[1280]: time="2025-05-17T00:31:59.098819502Z" level=info msg="StartContainer for \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\" returns successfully" May 17 00:31:59.118905 env[1280]: time="2025-05-17T00:31:59.118853153Z" level=info msg="shim disconnected" id=e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741 May 17 00:31:59.119087 env[1280]: time="2025-05-17T00:31:59.118904697Z" level=warning msg="cleaning up after shim disconnected" id=e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741 namespace=k8s.io May 17 00:31:59.119087 env[1280]: time="2025-05-17T00:31:59.118919250Z" level=info msg="cleaning up dead shim" May 17 00:31:59.125704 env[1280]: time="2025-05-17T00:31:59.125648270Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:31:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1962 runtime=io.containerd.runc.v2\n" May 17 00:31:59.228788 kubelet[1485]: E0517 00:31:59.228734 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:31:59.356937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741-rootfs.mount: Deactivated successfully. May 17 00:32:00.038464 kubelet[1485]: E0517 00:32:00.038421 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:00.043764 env[1280]: time="2025-05-17T00:32:00.043684523Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:32:00.061981 env[1280]: time="2025-05-17T00:32:00.061913875Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\"" May 17 00:32:00.062549 env[1280]: time="2025-05-17T00:32:00.062503603Z" level=info msg="StartContainer for \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\"" May 17 00:32:00.080875 systemd[1]: Started cri-containerd-3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc.scope. May 17 00:32:00.105831 systemd[1]: cri-containerd-3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc.scope: Deactivated successfully. 
May 17 00:32:00.107701 env[1280]: time="2025-05-17T00:32:00.107625466Z" level=info msg="StartContainer for \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\" returns successfully" May 17 00:32:00.129203 env[1280]: time="2025-05-17T00:32:00.129154023Z" level=info msg="shim disconnected" id=3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc May 17 00:32:00.129203 env[1280]: time="2025-05-17T00:32:00.129196893Z" level=warning msg="cleaning up after shim disconnected" id=3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc namespace=k8s.io May 17 00:32:00.129203 env[1280]: time="2025-05-17T00:32:00.129206970Z" level=info msg="cleaning up dead shim" May 17 00:32:00.136309 env[1280]: time="2025-05-17T00:32:00.136272086Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2015 runtime=io.containerd.runc.v2\n" May 17 00:32:00.229754 kubelet[1485]: E0517 00:32:00.229692 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:00.357155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc-rootfs.mount: Deactivated successfully. May 17 00:32:01.042349 kubelet[1485]: E0517 00:32:01.042313 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:01.047104 env[1280]: time="2025-05-17T00:32:01.047054430Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:32:01.065060 env[1280]: time="2025-05-17T00:32:01.065007010Z" level=info msg="CreateContainer within sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\"" May 17 00:32:01.065531 env[1280]: time="2025-05-17T00:32:01.065507079Z" level=info msg="StartContainer for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\"" May 17 00:32:01.080493 systemd[1]: Started cri-containerd-8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2.scope. 
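The lines above show the usual Cilium bring-up inside a single sandbox (59f6fa83...): the init containers mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state are each created, started, and run to completion (their scopes deactivate and the shim reports "shim disconnected"), and only then is the long-running cilium-agent container started. A rough helper for reading such a journal is sketched below; it only string-matches the exact phrases that appear in this log, and the function name and grouping scheme are my own.

    # Sketch: group the containerd messages above into per-container lifecycles.
    import re
    from collections import defaultdict

    HEX_ID = re.compile(r"[0-9a-f]{64}")

    def lifecycles(journal_lines):
        """Map 64-hex container ID -> lifecycle events observed in the log."""
        events = defaultdict(list)
        for line in journal_lines:
            ids = HEX_ID.findall(line)
            if not ids:
                continue
            if "returns container id" in line:
                events[ids[-1]].append("created")     # last ID on the line is the new container
            elif "StartContainer for" in line and "returns successfully" in line:
                events[ids[0]].append("started")
            elif "shim disconnected" in line and "cleaning up" not in line:
                events[ids[0]].append("exited")
        return dict(events)

    # Example: lifecycles(open("journal.txt")) yields entries such as
    # {"31b3e6005a8d...": ["created", "started", "exited"], ...} for the lines above.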
May 17 00:32:01.105582 env[1280]: time="2025-05-17T00:32:01.105528987Z" level=info msg="StartContainer for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" returns successfully" May 17 00:32:01.230746 kubelet[1485]: E0517 00:32:01.230692 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:01.239642 kubelet[1485]: I0517 00:32:01.238852 1485 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:32:01.421690 kernel: Initializing XFRM netlink socket May 17 00:32:02.048649 kubelet[1485]: E0517 00:32:02.048608 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:02.062329 kubelet[1485]: I0517 00:32:02.062260 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z6pt7" podStartSLOduration=8.402097021 podStartE2EDuration="29.062240007s" podCreationTimestamp="2025-05-17 00:31:33 +0000 UTC" firstStartedPulling="2025-05-17 00:31:36.683809875 +0000 UTC m=+4.252051785" lastFinishedPulling="2025-05-17 00:31:57.343952861 +0000 UTC m=+24.912194771" observedRunningTime="2025-05-17 00:32:02.062222027 +0000 UTC m=+29.630463927" watchObservedRunningTime="2025-05-17 00:32:02.062240007 +0000 UTC m=+29.630481917" May 17 00:32:02.231024 kubelet[1485]: E0517 00:32:02.230968 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:03.041320 systemd-networkd[1105]: cilium_host: Link UP May 17 00:32:03.041422 systemd-networkd[1105]: cilium_net: Link UP May 17 00:32:03.043810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 17 00:32:03.043855 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:32:03.043981 systemd-networkd[1105]: cilium_net: Gained carrier May 17 00:32:03.044136 systemd-networkd[1105]: cilium_host: Gained carrier May 17 00:32:03.052790 kubelet[1485]: E0517 00:32:03.052758 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:03.118293 systemd-networkd[1105]: cilium_vxlan: Link UP May 17 00:32:03.118308 systemd-networkd[1105]: cilium_vxlan: Gained carrier May 17 00:32:03.231946 kubelet[1485]: E0517 00:32:03.231884 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:03.315711 kernel: NET: Registered PF_ALG protocol family May 17 00:32:03.648828 systemd-networkd[1105]: cilium_host: Gained IPv6LL May 17 00:32:03.777873 systemd-networkd[1105]: cilium_net: Gained IPv6LL May 17 00:32:03.853267 systemd-networkd[1105]: lxc_health: Link UP May 17 00:32:03.862533 systemd-networkd[1105]: lxc_health: Gained carrier May 17 00:32:03.862749 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:32:04.054406 kubelet[1485]: E0517 00:32:04.054374 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:04.232701 kubelet[1485]: E0517 00:32:04.232617 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:04.736871 
systemd-networkd[1105]: cilium_vxlan: Gained IPv6LL May 17 00:32:04.875772 update_engine[1272]: I0517 00:32:04.875730 1272 update_attempter.cc:509] Updating boot flags... May 17 00:32:05.233532 kubelet[1485]: E0517 00:32:05.233504 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:05.600928 kubelet[1485]: E0517 00:32:05.600891 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:05.760882 systemd-networkd[1105]: lxc_health: Gained IPv6LL May 17 00:32:05.831774 systemd[1]: Created slice kubepods-besteffort-pod4ca14168_c27e_4133_a73c_137a3ca61b44.slice. May 17 00:32:05.902938 kubelet[1485]: I0517 00:32:05.902863 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5s6\" (UniqueName: \"kubernetes.io/projected/4ca14168-c27e-4133-a73c-137a3ca61b44-kube-api-access-2q5s6\") pod \"nginx-deployment-7fcdb87857-crxlj\" (UID: \"4ca14168-c27e-4133-a73c-137a3ca61b44\") " pod="default/nginx-deployment-7fcdb87857-crxlj" May 17 00:32:06.057176 kubelet[1485]: E0517 00:32:06.057150 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:06.134253 env[1280]: time="2025-05-17T00:32:06.134219055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-crxlj,Uid:4ca14168-c27e-4133-a73c-137a3ca61b44,Namespace:default,Attempt:0,}" May 17 00:32:06.170590 systemd-networkd[1105]: lxcb61b57a307fe: Link UP May 17 00:32:06.176697 kernel: eth0: renamed from tmp6c0cc May 17 00:32:06.184911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:32:06.185019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb61b57a307fe: link becomes ready May 17 00:32:06.185171 systemd-networkd[1105]: lxcb61b57a307fe: Gained carrier May 17 00:32:06.234018 kubelet[1485]: E0517 00:32:06.233972 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:07.058767 kubelet[1485]: E0517 00:32:07.058655 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:07.234410 kubelet[1485]: E0517 00:32:07.234373 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:08.109269 env[1280]: time="2025-05-17T00:32:08.109194687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:32:08.109269 env[1280]: time="2025-05-17T00:32:08.109233023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:32:08.109269 env[1280]: time="2025-05-17T00:32:08.109243972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:32:08.109649 env[1280]: time="2025-05-17T00:32:08.109356958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c0ccd895efaf3c781cf0b54e26f2ad1d90e7b9b194cee9ac9794cd6cfa11ddd pid=2540 runtime=io.containerd.runc.v2 May 17 00:32:08.121492 systemd[1]: run-containerd-runc-k8s.io-6c0ccd895efaf3c781cf0b54e26f2ad1d90e7b9b194cee9ac9794cd6cfa11ddd-runc.BjrUA5.mount: Deactivated successfully. May 17 00:32:08.124226 systemd[1]: Started cri-containerd-6c0ccd895efaf3c781cf0b54e26f2ad1d90e7b9b194cee9ac9794cd6cfa11ddd.scope. May 17 00:32:08.128886 systemd-networkd[1105]: lxcb61b57a307fe: Gained IPv6LL May 17 00:32:08.140475 systemd-resolved[1217]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:32:08.162662 env[1280]: time="2025-05-17T00:32:08.162618646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-crxlj,Uid:4ca14168-c27e-4133-a73c-137a3ca61b44,Namespace:default,Attempt:0,} returns sandbox id \"6c0ccd895efaf3c781cf0b54e26f2ad1d90e7b9b194cee9ac9794cd6cfa11ddd\"" May 17 00:32:08.163813 env[1280]: time="2025-05-17T00:32:08.163783939Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:32:08.234629 kubelet[1485]: E0517 00:32:08.234591 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:09.235538 kubelet[1485]: E0517 00:32:09.235476 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:10.235902 kubelet[1485]: E0517 00:32:10.235848 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:11.236755 kubelet[1485]: E0517 00:32:11.236687 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:11.428245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172001431.mount: Deactivated successfully. 
May 17 00:32:12.237630 kubelet[1485]: E0517 00:32:12.237574 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:13.196412 kubelet[1485]: E0517 00:32:13.196327 1485 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:13.237935 kubelet[1485]: E0517 00:32:13.237878 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:13.435916 env[1280]: time="2025-05-17T00:32:13.435858860Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:13.437969 env[1280]: time="2025-05-17T00:32:13.437914989Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:13.439758 env[1280]: time="2025-05-17T00:32:13.439725131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:13.441608 env[1280]: time="2025-05-17T00:32:13.441575215Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:13.442301 env[1280]: time="2025-05-17T00:32:13.442259759Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:32:13.446736 env[1280]: time="2025-05-17T00:32:13.446641545Z" level=info msg="CreateContainer within sandbox \"6c0ccd895efaf3c781cf0b54e26f2ad1d90e7b9b194cee9ac9794cd6cfa11ddd\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 17 00:32:13.460051 env[1280]: time="2025-05-17T00:32:13.460002442Z" level=info msg="CreateContainer within sandbox \"6c0ccd895efaf3c781cf0b54e26f2ad1d90e7b9b194cee9ac9794cd6cfa11ddd\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b67e872d75f6ce9d00dfa67be6a21e4f450bc51aeb9f1d6ebc69cb386c3ac818\"" May 17 00:32:13.460547 env[1280]: time="2025-05-17T00:32:13.460525400Z" level=info msg="StartContainer for \"b67e872d75f6ce9d00dfa67be6a21e4f450bc51aeb9f1d6ebc69cb386c3ac818\"" May 17 00:32:13.475719 systemd[1]: Started cri-containerd-b67e872d75f6ce9d00dfa67be6a21e4f450bc51aeb9f1d6ebc69cb386c3ac818.scope. 
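The nginx image pull above can be timed from the journal itself: containerd logs PullImage "ghcr.io/flatcar/nginx:latest" at 00:32:08.163 and the matching "returns image reference" at 00:32:13.442, roughly 5.3 s. Below is a small sketch that pairs those two messages per image; it again just matches the phrases used in this log, tolerates the escaped quotes seen in the journal text, and the helper names are made up.

    # Sketch: measure image pull latency by pairing "PullImage <image>" with the
    # corresponding "PullImage <image> returns image reference" message.
    import re
    from datetime import datetime

    TIME  = re.compile(r'time="([^"]+)"')
    IMAGE = re.compile(r'PullImage \\?"([^"\\]+)\\?"')

    def parse_time(line):
        m = TIME.search(line)
        if not m:
            return None
        stamp = m.group(1).rstrip("Z")          # e.g. 2025-05-17T00:32:08.163783939
        head, _, frac = stamp.partition(".")
        return datetime.fromisoformat(f"{head}.{(frac + '000000')[:6]}")

    def pull_durations(journal_lines):
        started, durations = {}, {}
        for line in journal_lines:
            m, t = IMAGE.search(line), parse_time(line)
            if not m or t is None:
                continue
            image = m.group(1)
            if "returns image reference" in line and image in started:
                durations[image] = (t - started.pop(image)).total_seconds()
            elif "returns image reference" not in line:
                started[image] = t
        return durations   # here: {"ghcr.io/flatcar/nginx:latest": ~5.28}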
May 17 00:32:13.497974 env[1280]: time="2025-05-17T00:32:13.497925907Z" level=info msg="StartContainer for \"b67e872d75f6ce9d00dfa67be6a21e4f450bc51aeb9f1d6ebc69cb386c3ac818\" returns successfully" May 17 00:32:14.238746 kubelet[1485]: E0517 00:32:14.238702 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:15.239357 kubelet[1485]: E0517 00:32:15.239306 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:16.239543 kubelet[1485]: E0517 00:32:16.239479 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:17.240597 kubelet[1485]: E0517 00:32:17.240531 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:17.469404 kubelet[1485]: I0517 00:32:17.469327 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-crxlj" podStartSLOduration=7.189359654 podStartE2EDuration="12.4693035s" podCreationTimestamp="2025-05-17 00:32:05 +0000 UTC" firstStartedPulling="2025-05-17 00:32:08.163303384 +0000 UTC m=+35.731545295" lastFinishedPulling="2025-05-17 00:32:13.443247231 +0000 UTC m=+41.011489141" observedRunningTime="2025-05-17 00:32:14.107072044 +0000 UTC m=+41.675313974" watchObservedRunningTime="2025-05-17 00:32:17.4693035 +0000 UTC m=+45.037545410" May 17 00:32:17.484394 systemd[1]: Created slice kubepods-besteffort-podc801a0ee_b077_4aba_a5c8_efc22676b4a4.slice. May 17 00:32:17.668761 kubelet[1485]: I0517 00:32:17.668688 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95tbn\" (UniqueName: \"kubernetes.io/projected/c801a0ee-b077-4aba-a5c8-efc22676b4a4-kube-api-access-95tbn\") pod \"nfs-server-provisioner-0\" (UID: \"c801a0ee-b077-4aba-a5c8-efc22676b4a4\") " pod="default/nfs-server-provisioner-0" May 17 00:32:17.668761 kubelet[1485]: I0517 00:32:17.668749 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c801a0ee-b077-4aba-a5c8-efc22676b4a4-data\") pod \"nfs-server-provisioner-0\" (UID: \"c801a0ee-b077-4aba-a5c8-efc22676b4a4\") " pod="default/nfs-server-provisioner-0" May 17 00:32:17.788335 env[1280]: time="2025-05-17T00:32:17.788276917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c801a0ee-b077-4aba-a5c8-efc22676b4a4,Namespace:default,Attempt:0,}" May 17 00:32:17.912752 systemd-networkd[1105]: lxc2f2e39f7e0d3: Link UP May 17 00:32:17.919702 kernel: eth0: renamed from tmpcef3f May 17 00:32:17.928463 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:32:17.928556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2f2e39f7e0d3: link becomes ready May 17 00:32:17.928699 systemd-networkd[1105]: lxc2f2e39f7e0d3: Gained carrier May 17 00:32:18.131920 env[1280]: time="2025-05-17T00:32:18.131842826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:32:18.131920 env[1280]: time="2025-05-17T00:32:18.131884240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:32:18.131920 env[1280]: time="2025-05-17T00:32:18.131907763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:32:18.132126 env[1280]: time="2025-05-17T00:32:18.132046453Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cef3f41e4666dcdb9c383440addf8acb55a0026870e29b6a85d041d45c7982b2 pid=2671 runtime=io.containerd.runc.v2 May 17 00:32:18.142534 systemd[1]: Started cri-containerd-cef3f41e4666dcdb9c383440addf8acb55a0026870e29b6a85d041d45c7982b2.scope. May 17 00:32:18.152172 systemd-resolved[1217]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:32:18.172259 env[1280]: time="2025-05-17T00:32:18.172213831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c801a0ee-b077-4aba-a5c8-efc22676b4a4,Namespace:default,Attempt:0,} returns sandbox id \"cef3f41e4666dcdb9c383440addf8acb55a0026870e29b6a85d041d45c7982b2\"" May 17 00:32:18.173540 env[1280]: time="2025-05-17T00:32:18.173510218Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 17 00:32:18.241661 kubelet[1485]: E0517 00:32:18.241519 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:19.242773 kubelet[1485]: E0517 00:32:19.242710 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:19.714561 systemd-networkd[1105]: lxc2f2e39f7e0d3: Gained IPv6LL May 17 00:32:20.242896 kubelet[1485]: E0517 00:32:20.242842 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:20.815912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459775923.mount: Deactivated successfully. 
May 17 00:32:21.243957 kubelet[1485]: E0517 00:32:21.243825 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:22.244523 kubelet[1485]: E0517 00:32:22.244462 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:23.244658 kubelet[1485]: E0517 00:32:23.244598 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:23.385642 env[1280]: time="2025-05-17T00:32:23.385580997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:23.388186 env[1280]: time="2025-05-17T00:32:23.388160779Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:23.392873 env[1280]: time="2025-05-17T00:32:23.392834047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:23.396345 env[1280]: time="2025-05-17T00:32:23.396284616Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:23.396893 env[1280]: time="2025-05-17T00:32:23.396851650Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 17 00:32:23.405730 env[1280]: time="2025-05-17T00:32:23.405683767Z" level=info msg="CreateContainer within sandbox \"cef3f41e4666dcdb9c383440addf8acb55a0026870e29b6a85d041d45c7982b2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 17 00:32:23.424829 env[1280]: time="2025-05-17T00:32:23.424787535Z" level=info msg="CreateContainer within sandbox \"cef3f41e4666dcdb9c383440addf8acb55a0026870e29b6a85d041d45c7982b2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ef1c4c555af29a36a7744410b7c41e1999c34eb31d98ac3245cada89bd5bd9d4\"" May 17 00:32:23.425327 env[1280]: time="2025-05-17T00:32:23.425304067Z" level=info msg="StartContainer for \"ef1c4c555af29a36a7744410b7c41e1999c34eb31d98ac3245cada89bd5bd9d4\"" May 17 00:32:23.443648 systemd[1]: Started cri-containerd-ef1c4c555af29a36a7744410b7c41e1999c34eb31d98ac3245cada89bd5bd9d4.scope. 
May 17 00:32:23.525292 env[1280]: time="2025-05-17T00:32:23.525212762Z" level=info msg="StartContainer for \"ef1c4c555af29a36a7744410b7c41e1999c34eb31d98ac3245cada89bd5bd9d4\" returns successfully" May 17 00:32:24.135356 kubelet[1485]: I0517 00:32:24.135266 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.909495476 podStartE2EDuration="7.135247577s" podCreationTimestamp="2025-05-17 00:32:17 +0000 UTC" firstStartedPulling="2025-05-17 00:32:18.173272489 +0000 UTC m=+45.741514399" lastFinishedPulling="2025-05-17 00:32:23.39902459 +0000 UTC m=+50.967266500" observedRunningTime="2025-05-17 00:32:24.135167591 +0000 UTC m=+51.703409501" watchObservedRunningTime="2025-05-17 00:32:24.135247577 +0000 UTC m=+51.703489488" May 17 00:32:24.245351 kubelet[1485]: E0517 00:32:24.245287 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:25.246293 kubelet[1485]: E0517 00:32:25.246211 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:26.247056 kubelet[1485]: E0517 00:32:26.247000 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:27.247787 kubelet[1485]: E0517 00:32:27.247740 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:28.247994 kubelet[1485]: E0517 00:32:28.247948 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:28.697389 systemd[1]: Created slice kubepods-besteffort-podbaa0dc16_b16b_4e83_8708_d6eace797e4e.slice. May 17 00:32:28.821313 kubelet[1485]: I0517 00:32:28.821237 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-839fa52a-3cf7-4f3e-9714-ada9f4bb63ec\" (UniqueName: \"kubernetes.io/nfs/baa0dc16-b16b-4e83-8708-d6eace797e4e-pvc-839fa52a-3cf7-4f3e-9714-ada9f4bb63ec\") pod \"test-pod-1\" (UID: \"baa0dc16-b16b-4e83-8708-d6eace797e4e\") " pod="default/test-pod-1" May 17 00:32:28.821313 kubelet[1485]: I0517 00:32:28.821292 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmw8\" (UniqueName: \"kubernetes.io/projected/baa0dc16-b16b-4e83-8708-d6eace797e4e-kube-api-access-2cmw8\") pod \"test-pod-1\" (UID: \"baa0dc16-b16b-4e83-8708-d6eace797e4e\") " pod="default/test-pod-1" May 17 00:32:28.943704 kernel: FS-Cache: Loaded May 17 00:32:28.983392 kernel: RPC: Registered named UNIX socket transport module. May 17 00:32:28.983476 kernel: RPC: Registered udp transport module. May 17 00:32:28.983502 kernel: RPC: Registered tcp transport module. May 17 00:32:28.985525 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 17 00:32:29.039701 kernel: FS-Cache: Netfs 'nfs' registered for caching May 17 00:32:29.224073 kernel: NFS: Registering the id_resolver key type May 17 00:32:29.224220 kernel: Key type id_resolver registered May 17 00:32:29.224239 kernel: Key type id_legacy registered May 17 00:32:29.247282 nfsidmap[2787]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 17 00:32:29.248455 kubelet[1485]: E0517 00:32:29.248416 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:29.249980 nfsidmap[2790]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 17 00:32:29.299477 env[1280]: time="2025-05-17T00:32:29.299411361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:baa0dc16-b16b-4e83-8708-d6eace797e4e,Namespace:default,Attempt:0,}" May 17 00:32:30.056850 systemd-networkd[1105]: lxcd412266e0f1b: Link UP May 17 00:32:30.062714 kernel: eth0: renamed from tmpa21a6 May 17 00:32:30.071280 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:32:30.071367 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd412266e0f1b: link becomes ready May 17 00:32:30.071383 systemd-networkd[1105]: lxcd412266e0f1b: Gained carrier May 17 00:32:30.248971 kubelet[1485]: E0517 00:32:30.248922 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:30.319348 env[1280]: time="2025-05-17T00:32:30.319226468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:32:30.319348 env[1280]: time="2025-05-17T00:32:30.319264527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:32:30.319348 env[1280]: time="2025-05-17T00:32:30.319274175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:32:30.319652 env[1280]: time="2025-05-17T00:32:30.319410337Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a21a65a37ca47d6c645fde2b6f51436da1e91e6d8d7ab8c2b35283cf04c0ad95 pid=2824 runtime=io.containerd.runc.v2 May 17 00:32:30.329780 systemd[1]: Started cri-containerd-a21a65a37ca47d6c645fde2b6f51436da1e91e6d8d7ab8c2b35283cf04c0ad95.scope. 
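The nfsidmap messages above are the NFSv4 ID mapper declining to translate "root@nfs-server-provisioner.default.svc.cluster.local" because the part after the "@" does not match the local idmapd domain ("localdomain"); such owners then typically fall back to the anonymous/nobody mapping. The check itself is essentially a domain comparison, sketched below as an illustration of the rule rather than the rpc.idmapd/nfsidmap implementation; case-insensitive comparison is an assumption.

    # Sketch of the NFSv4 id-mapping rule behind the nfsidmap messages above:
    # "user@DOMAIN" only maps to a local account when DOMAIN equals the local
    # idmapd domain (here 'localdomain'); otherwise the mapping is rejected.
    def maps_into_domain(owner: str, local_domain: str) -> bool:
        name, _, domain = owner.partition("@")
        return bool(name) and domain.lower() == local_domain.lower()

    print(maps_into_domain(
        "root@nfs-server-provisioner.default.svc.cluster.local", "localdomain"))  # False
    print(maps_into_domain("root@localdomain", "localdomain"))                     # True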
May 17 00:32:30.338617 systemd-resolved[1217]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:32:30.358095 env[1280]: time="2025-05-17T00:32:30.358059825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:baa0dc16-b16b-4e83-8708-d6eace797e4e,Namespace:default,Attempt:0,} returns sandbox id \"a21a65a37ca47d6c645fde2b6f51436da1e91e6d8d7ab8c2b35283cf04c0ad95\"" May 17 00:32:30.359230 env[1280]: time="2025-05-17T00:32:30.359194134Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:32:30.908654 env[1280]: time="2025-05-17T00:32:30.908581150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:30.915094 env[1280]: time="2025-05-17T00:32:30.915020450Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:30.916927 env[1280]: time="2025-05-17T00:32:30.916889342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:30.918544 env[1280]: time="2025-05-17T00:32:30.918491464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:30.919207 env[1280]: time="2025-05-17T00:32:30.919175514Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:32:30.927835 env[1280]: time="2025-05-17T00:32:30.927786945Z" level=info msg="CreateContainer within sandbox \"a21a65a37ca47d6c645fde2b6f51436da1e91e6d8d7ab8c2b35283cf04c0ad95\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 17 00:32:30.943919 env[1280]: time="2025-05-17T00:32:30.943511799Z" level=info msg="CreateContainer within sandbox \"a21a65a37ca47d6c645fde2b6f51436da1e91e6d8d7ab8c2b35283cf04c0ad95\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4778272013e0138c6f99cfaf85766874def9c36dc28bb8a5a23b2b7ff7c3bf3e\"" May 17 00:32:30.944443 env[1280]: time="2025-05-17T00:32:30.944409562Z" level=info msg="StartContainer for \"4778272013e0138c6f99cfaf85766874def9c36dc28bb8a5a23b2b7ff7c3bf3e\"" May 17 00:32:30.958912 systemd[1]: Started cri-containerd-4778272013e0138c6f99cfaf85766874def9c36dc28bb8a5a23b2b7ff7c3bf3e.scope. 
May 17 00:32:31.079401 env[1280]: time="2025-05-17T00:32:31.079338999Z" level=info msg="StartContainer for \"4778272013e0138c6f99cfaf85766874def9c36dc28bb8a5a23b2b7ff7c3bf3e\" returns successfully" May 17 00:32:31.232853 systemd-networkd[1105]: lxcd412266e0f1b: Gained IPv6LL May 17 00:32:31.249623 kubelet[1485]: E0517 00:32:31.249534 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:31.250121 kubelet[1485]: I0517 00:32:31.250061 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=13.688795092 podStartE2EDuration="14.250042412s" podCreationTimestamp="2025-05-17 00:32:17 +0000 UTC" firstStartedPulling="2025-05-17 00:32:30.358886538 +0000 UTC m=+57.927128448" lastFinishedPulling="2025-05-17 00:32:30.920133858 +0000 UTC m=+58.488375768" observedRunningTime="2025-05-17 00:32:31.249779117 +0000 UTC m=+58.818021027" watchObservedRunningTime="2025-05-17 00:32:31.250042412 +0000 UTC m=+58.818284322" May 17 00:32:32.250544 kubelet[1485]: E0517 00:32:32.250480 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:33.197295 kubelet[1485]: E0517 00:32:33.197244 1485 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:33.250683 kubelet[1485]: E0517 00:32:33.250639 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:34.251188 kubelet[1485]: E0517 00:32:34.251140 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:35.209902 env[1280]: time="2025-05-17T00:32:35.209828438Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:32:35.216169 env[1280]: time="2025-05-17T00:32:35.216126938Z" level=info msg="StopContainer for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" with timeout 2 (s)" May 17 00:32:35.216439 env[1280]: time="2025-05-17T00:32:35.216403300Z" level=info msg="Stop container \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" with signal terminated" May 17 00:32:35.222202 systemd-networkd[1105]: lxc_health: Link DOWN May 17 00:32:35.222210 systemd-networkd[1105]: lxc_health: Lost carrier May 17 00:32:35.251476 kubelet[1485]: E0517 00:32:35.251433 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:35.261117 systemd[1]: cri-containerd-8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2.scope: Deactivated successfully. May 17 00:32:35.261432 systemd[1]: cri-containerd-8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2.scope: Consumed 6.070s CPU time. May 17 00:32:35.275323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2-rootfs.mount: Deactivated successfully. 
May 17 00:32:35.284730 env[1280]: time="2025-05-17T00:32:35.284681239Z" level=info msg="shim disconnected" id=8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2 May 17 00:32:35.284853 env[1280]: time="2025-05-17T00:32:35.284737803Z" level=warning msg="cleaning up after shim disconnected" id=8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2 namespace=k8s.io May 17 00:32:35.284853 env[1280]: time="2025-05-17T00:32:35.284750236Z" level=info msg="cleaning up dead shim" May 17 00:32:35.291218 env[1280]: time="2025-05-17T00:32:35.291176262Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2957 runtime=io.containerd.runc.v2\n" May 17 00:32:35.294388 env[1280]: time="2025-05-17T00:32:35.294351591Z" level=info msg="StopContainer for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" returns successfully" May 17 00:32:35.295059 env[1280]: time="2025-05-17T00:32:35.295033894Z" level=info msg="StopPodSandbox for \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\"" May 17 00:32:35.295115 env[1280]: time="2025-05-17T00:32:35.295091550Z" level=info msg="Container to stop \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:32:35.295115 env[1280]: time="2025-05-17T00:32:35.295108371Z" level=info msg="Container to stop \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:32:35.295176 env[1280]: time="2025-05-17T00:32:35.295123480Z" level=info msg="Container to stop \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:32:35.295176 env[1280]: time="2025-05-17T00:32:35.295136072Z" level=info msg="Container to stop \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:32:35.295176 env[1280]: time="2025-05-17T00:32:35.295148406Z" level=info msg="Container to stop \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:32:35.297149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129-shm.mount: Deactivated successfully. May 17 00:32:35.301068 systemd[1]: cri-containerd-59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129.scope: Deactivated successfully. May 17 00:32:35.321400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129-rootfs.mount: Deactivated successfully. 
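The teardown above begins with the Cilium CNI config being removed, after which containerd issues "StopContainer ... with timeout 2 (s)": the stop signal is delivered ("Stop container ... with signal terminated"), the runtime waits up to the timeout for the process to exit, and only then escalates to a kill. The later "must be in running or unknown state, current state CONTAINER_EXITED" lines for the four init containers are expected, since those containers finished long ago and only their records are cleaned up when the sandbox is stopped. Below is a generic terminate-then-kill sketch of that stop pattern; it is not containerd code, just the shape of the behaviour, with made-up names.

    # Generic sketch of "stop with timeout": SIGTERM, wait, then SIGKILL.
    import signal
    import subprocess

    def stop_with_timeout(proc: subprocess.Popen, timeout: float = 2.0) -> int:
        proc.send_signal(signal.SIGTERM)       # graceful stop, like "signal terminated"
        try:
            return proc.wait(timeout=timeout)  # give it `timeout` seconds to exit
        except subprocess.TimeoutExpired:
            proc.kill()                        # escalate to SIGKILL
            return proc.wait()

    if __name__ == "__main__":
        p = subprocess.Popen(["sleep", "60"])
        print("exit status:", stop_with_timeout(p, timeout=2.0))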
May 17 00:32:35.324598 env[1280]: time="2025-05-17T00:32:35.324551307Z" level=info msg="shim disconnected" id=59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129 May 17 00:32:35.324598 env[1280]: time="2025-05-17T00:32:35.324597593Z" level=warning msg="cleaning up after shim disconnected" id=59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129 namespace=k8s.io May 17 00:32:35.324799 env[1280]: time="2025-05-17T00:32:35.324606650Z" level=info msg="cleaning up dead shim" May 17 00:32:35.330639 env[1280]: time="2025-05-17T00:32:35.330591810Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2988 runtime=io.containerd.runc.v2\n" May 17 00:32:35.330939 env[1280]: time="2025-05-17T00:32:35.330907264Z" level=info msg="TearDown network for sandbox \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" successfully" May 17 00:32:35.330939 env[1280]: time="2025-05-17T00:32:35.330931810Z" level=info msg="StopPodSandbox for \"59f6fa83fd7220e5b748604b423480785b25fd7663686e97311265697b687129\" returns successfully" May 17 00:32:35.462565 kubelet[1485]: I0517 00:32:35.462423 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-net\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462565 kubelet[1485]: I0517 00:32:35.462464 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-hubble-tls\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462565 kubelet[1485]: I0517 00:32:35.462478 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-cgroup\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462565 kubelet[1485]: I0517 00:32:35.462493 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dab1a348-85d4-4040-934d-4c0658d05311-clustermesh-secrets\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462565 kubelet[1485]: I0517 00:32:35.462509 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dab1a348-85d4-4040-934d-4c0658d05311-cilium-config-path\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462565 kubelet[1485]: I0517 00:32:35.462526 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlfjn\" (UniqueName: \"kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-kube-api-access-xlfjn\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462875 kubelet[1485]: I0517 00:32:35.462537 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-hostproc\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" 
(UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.462875 kubelet[1485]: I0517 00:32:35.462600 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.462875 kubelet[1485]: I0517 00:32:35.462693 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.463111 kubelet[1485]: I0517 00:32:35.463086 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-run\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463238 kubelet[1485]: I0517 00:32:35.463220 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-etc-cni-netd\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463350 kubelet[1485]: I0517 00:32:35.463332 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-kernel\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463540 kubelet[1485]: I0517 00:32:35.463427 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-bpf-maps\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463540 kubelet[1485]: I0517 00:32:35.463521 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cni-path\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463540 kubelet[1485]: I0517 00:32:35.463538 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-lib-modules\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463839 kubelet[1485]: I0517 00:32:35.463550 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-xtables-lock\") pod \"dab1a348-85d4-4040-934d-4c0658d05311\" (UID: \"dab1a348-85d4-4040-934d-4c0658d05311\") " May 17 00:32:35.463839 kubelet[1485]: I0517 00:32:35.463634 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-cgroup\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.463839 kubelet[1485]: I0517 00:32:35.463652 1485 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-net\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.463839 kubelet[1485]: I0517 00:32:35.463718 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.463839 kubelet[1485]: I0517 00:32:35.463748 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.463839 kubelet[1485]: I0517 00:32:35.463763 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.463985 kubelet[1485]: I0517 00:32:35.463778 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.463985 kubelet[1485]: I0517 00:32:35.463792 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cni-path" (OuterVolumeSpecName: "cni-path") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.463985 kubelet[1485]: I0517 00:32:35.463809 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.465556 kubelet[1485]: I0517 00:32:35.464090 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.465556 kubelet[1485]: I0517 00:32:35.464120 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-hostproc" (OuterVolumeSpecName: "hostproc") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:35.465914 kubelet[1485]: I0517 00:32:35.465894 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab1a348-85d4-4040-934d-4c0658d05311-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:32:35.466014 kubelet[1485]: I0517 00:32:35.465988 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-kube-api-access-xlfjn" (OuterVolumeSpecName: "kube-api-access-xlfjn") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "kube-api-access-xlfjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:32:35.466775 kubelet[1485]: I0517 00:32:35.466755 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab1a348-85d4-4040-934d-4c0658d05311-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:32:35.466872 kubelet[1485]: I0517 00:32:35.466787 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dab1a348-85d4-4040-934d-4c0658d05311" (UID: "dab1a348-85d4-4040-934d-4c0658d05311"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:32:35.467159 systemd[1]: var-lib-kubelet-pods-dab1a348\x2d85d4\x2d4040\x2d934d\x2d4c0658d05311-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxlfjn.mount: Deactivated successfully. 
May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564118 1485 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-host-proc-sys-kernel\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564139 1485 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-bpf-maps\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564147 1485 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cni-path\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564155 1485 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-lib-modules\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564161 1485 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-xtables-lock\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564168 1485 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-hubble-tls\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564176 1485 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dab1a348-85d4-4040-934d-4c0658d05311-clustermesh-secrets\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564150 kubelet[1485]: I0517 00:32:35.564183 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dab1a348-85d4-4040-934d-4c0658d05311-cilium-config-path\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564518 kubelet[1485]: I0517 00:32:35.564191 1485 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xlfjn\" (UniqueName: \"kubernetes.io/projected/dab1a348-85d4-4040-934d-4c0658d05311-kube-api-access-xlfjn\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564518 kubelet[1485]: I0517 00:32:35.564198 1485 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-hostproc\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564518 kubelet[1485]: I0517 00:32:35.564206 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-cilium-run\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.564518 kubelet[1485]: I0517 00:32:35.564212 1485 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dab1a348-85d4-4040-934d-4c0658d05311-etc-cni-netd\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:35.641990 systemd[1]: Removed slice kubepods-burstable-poddab1a348_85d4_4040_934d_4c0658d05311.slice. May 17 00:32:35.642075 systemd[1]: kubepods-burstable-poddab1a348_85d4_4040_934d_4c0658d05311.slice: Consumed 6.172s CPU time. 
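The block above is the kubelet's volume reconciler (reconciler_common.go / operation_generator.go) tearing down every volume declared by the old Cilium pod dab1a348-85d4-4040-934d-4c0658d05311 — the host-path mounts, the cilium-config-path ConfigMap, the clustermesh-secrets Secret and the projected kube-api-access token — after which systemd removes the pod's cgroup slice. For cross-referencing such entries against the pod spec, here is a minimal client-go sketch that lists a pod's declared volumes; the kubeconfig path and pod name are placeholders, not values taken from this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Placeholder pod name; the log identifies the pod only by its UID.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Each declared volume corresponds to one UnmountVolume.TearDown /
	// "Volume detached" pair when the pod is deleted.
	for _, v := range pod.Spec.Volumes {
		fmt.Printf("volume %q: %#v\n", v.Name, v.VolumeSource)
	}
}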
May 17 00:32:36.115977 kubelet[1485]: I0517 00:32:36.115950 1485 scope.go:117] "RemoveContainer" containerID="8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2" May 17 00:32:36.117101 env[1280]: time="2025-05-17T00:32:36.117062455Z" level=info msg="RemoveContainer for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\"" May 17 00:32:36.122751 env[1280]: time="2025-05-17T00:32:36.122708415Z" level=info msg="RemoveContainer for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" returns successfully" May 17 00:32:36.122947 kubelet[1485]: I0517 00:32:36.122923 1485 scope.go:117] "RemoveContainer" containerID="3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc" May 17 00:32:36.123877 env[1280]: time="2025-05-17T00:32:36.123843609Z" level=info msg="RemoveContainer for \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\"" May 17 00:32:36.126723 env[1280]: time="2025-05-17T00:32:36.126686647Z" level=info msg="RemoveContainer for \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\" returns successfully" May 17 00:32:36.126889 kubelet[1485]: I0517 00:32:36.126866 1485 scope.go:117] "RemoveContainer" containerID="e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741" May 17 00:32:36.128088 env[1280]: time="2025-05-17T00:32:36.128046035Z" level=info msg="RemoveContainer for \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\"" May 17 00:32:36.134303 env[1280]: time="2025-05-17T00:32:36.134242926Z" level=info msg="RemoveContainer for \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\" returns successfully" May 17 00:32:36.134468 kubelet[1485]: I0517 00:32:36.134439 1485 scope.go:117] "RemoveContainer" containerID="6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d" May 17 00:32:36.135486 env[1280]: time="2025-05-17T00:32:36.135460232Z" level=info msg="RemoveContainer for \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\"" May 17 00:32:36.138204 env[1280]: time="2025-05-17T00:32:36.138171616Z" level=info msg="RemoveContainer for \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\" returns successfully" May 17 00:32:36.138383 kubelet[1485]: I0517 00:32:36.138333 1485 scope.go:117] "RemoveContainer" containerID="31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b" May 17 00:32:36.139306 env[1280]: time="2025-05-17T00:32:36.139281261Z" level=info msg="RemoveContainer for \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\"" May 17 00:32:36.142041 env[1280]: time="2025-05-17T00:32:36.142005349Z" level=info msg="RemoveContainer for \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\" returns successfully" May 17 00:32:36.142187 kubelet[1485]: I0517 00:32:36.142159 1485 scope.go:117] "RemoveContainer" containerID="8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2" May 17 00:32:36.142490 env[1280]: time="2025-05-17T00:32:36.142393348Z" level=error msg="ContainerStatus for \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\": not found" May 17 00:32:36.142631 kubelet[1485]: E0517 00:32:36.142601 1485 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\": not found" containerID="8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2" May 17 00:32:36.142705 kubelet[1485]: I0517 00:32:36.142637 1485 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2"} err="failed to get container status \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b34d8b7dc9ffb647308f871151e3fd3385d060755e37eb8418304d7e758baf2\": not found" May 17 00:32:36.142705 kubelet[1485]: I0517 00:32:36.142690 1485 scope.go:117] "RemoveContainer" containerID="3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc" May 17 00:32:36.142891 env[1280]: time="2025-05-17T00:32:36.142841968Z" level=error msg="ContainerStatus for \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\": not found" May 17 00:32:36.142987 kubelet[1485]: E0517 00:32:36.142963 1485 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\": not found" containerID="3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc" May 17 00:32:36.143041 kubelet[1485]: I0517 00:32:36.142985 1485 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc"} err="failed to get container status \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\": rpc error: code = NotFound desc = an error occurred when try to find container \"3991c34766f6a2aea793b46c8bf3ddb3c93352eba7007828f3ca1bc3dee7febc\": not found" May 17 00:32:36.143041 kubelet[1485]: I0517 00:32:36.143004 1485 scope.go:117] "RemoveContainer" containerID="e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741" May 17 00:32:36.143204 env[1280]: time="2025-05-17T00:32:36.143160118Z" level=error msg="ContainerStatus for \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\": not found" May 17 00:32:36.143364 kubelet[1485]: E0517 00:32:36.143329 1485 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\": not found" containerID="e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741" May 17 00:32:36.143413 kubelet[1485]: I0517 00:32:36.143369 1485 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741"} err="failed to get container status \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5f9973a2849e55352e3bc1245186764ae9469895d0de642fb02f6022ccae741\": not found" May 17 00:32:36.143413 kubelet[1485]: I0517 00:32:36.143396 1485 scope.go:117] "RemoveContainer" 
containerID="6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d" May 17 00:32:36.143630 env[1280]: time="2025-05-17T00:32:36.143561782Z" level=error msg="ContainerStatus for \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\": not found" May 17 00:32:36.143764 kubelet[1485]: E0517 00:32:36.143741 1485 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\": not found" containerID="6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d" May 17 00:32:36.143814 kubelet[1485]: I0517 00:32:36.143766 1485 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d"} err="failed to get container status \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c2348e676fbadf49bad7d837aa69ab23eb4a8c9fe4ef6da82451e079180154d\": not found" May 17 00:32:36.143814 kubelet[1485]: I0517 00:32:36.143785 1485 scope.go:117] "RemoveContainer" containerID="31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b" May 17 00:32:36.144000 env[1280]: time="2025-05-17T00:32:36.143949811Z" level=error msg="ContainerStatus for \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\": not found" May 17 00:32:36.144097 kubelet[1485]: E0517 00:32:36.144080 1485 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\": not found" containerID="31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b" May 17 00:32:36.144131 kubelet[1485]: I0517 00:32:36.144098 1485 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b"} err="failed to get container status \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"31b3e6005a8d04602b0568c24e87cdaa04e0caa3b5512e65ea2471b6806d9d7b\": not found" May 17 00:32:36.195032 systemd[1]: var-lib-kubelet-pods-dab1a348\x2d85d4\x2d4040\x2d934d\x2d4c0658d05311-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:32:36.195124 systemd[1]: var-lib-kubelet-pods-dab1a348\x2d85d4\x2d4040\x2d934d\x2d4c0658d05311-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 00:32:36.252528 kubelet[1485]: E0517 00:32:36.252481 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:37.253110 kubelet[1485]: E0517 00:32:37.253062 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:37.256140 kubelet[1485]: E0517 00:32:37.256106 1485 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.61\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.61' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" May 17 00:32:37.256140 kubelet[1485]: I0517 00:32:37.256105 1485 status_manager.go:895] "Failed to get status for pod" podUID="fa10ae7f-4300-4f87-a054-0f92459a5b47" pod="kube-system/cilium-operator-6c4d7847fc-2bmf4" err="pods \"cilium-operator-6c4d7847fc-2bmf4\" is forbidden: User \"system:node:10.0.0.61\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.61' and this object" May 17 00:32:37.257987 systemd[1]: Created slice kubepods-besteffort-podfa10ae7f_4300_4f87_a054_0f92459a5b47.slice. May 17 00:32:37.268249 systemd[1]: Created slice kubepods-burstable-podb306e443_c238_4c28_afe3_9f08a335df67.slice. May 17 00:32:37.373862 kubelet[1485]: I0517 00:32:37.373829 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntg7p\" (UniqueName: \"kubernetes.io/projected/fa10ae7f-4300-4f87-a054-0f92459a5b47-kube-api-access-ntg7p\") pod \"cilium-operator-6c4d7847fc-2bmf4\" (UID: \"fa10ae7f-4300-4f87-a054-0f92459a5b47\") " pod="kube-system/cilium-operator-6c4d7847fc-2bmf4" May 17 00:32:37.374003 kubelet[1485]: I0517 00:32:37.373872 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-run\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374003 kubelet[1485]: I0517 00:32:37.373895 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-hostproc\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374003 kubelet[1485]: I0517 00:32:37.373913 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-cilium-ipsec-secrets\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374003 kubelet[1485]: I0517 00:32:37.373939 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-net\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374003 kubelet[1485]: I0517 00:32:37.373957 1485 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-bpf-maps\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374003 kubelet[1485]: I0517 00:32:37.373975 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-cgroup\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374229 kubelet[1485]: I0517 00:32:37.373993 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-lib-modules\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374229 kubelet[1485]: I0517 00:32:37.374014 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cni-path\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374229 kubelet[1485]: I0517 00:32:37.374033 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-etc-cni-netd\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374229 kubelet[1485]: I0517 00:32:37.374058 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b306e443-c238-4c28-afe3-9f08a335df67-cilium-config-path\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374229 kubelet[1485]: I0517 00:32:37.374080 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-kernel\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374353 kubelet[1485]: I0517 00:32:37.374105 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa10ae7f-4300-4f87-a054-0f92459a5b47-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2bmf4\" (UID: \"fa10ae7f-4300-4f87-a054-0f92459a5b47\") " pod="kube-system/cilium-operator-6c4d7847fc-2bmf4" May 17 00:32:37.374353 kubelet[1485]: I0517 00:32:37.374125 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-xtables-lock\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374353 kubelet[1485]: I0517 00:32:37.374144 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-clustermesh-secrets\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374353 kubelet[1485]: I0517 00:32:37.374161 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-hubble-tls\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.374353 kubelet[1485]: I0517 00:32:37.374191 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9jxd\" (UniqueName: \"kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-kube-api-access-q9jxd\") pod \"cilium-ld5v2\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " pod="kube-system/cilium-ld5v2" May 17 00:32:37.427296 kubelet[1485]: E0517 00:32:37.427259 1485 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-q9jxd lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-ld5v2" podUID="b306e443-c238-4c28-afe3-9f08a335df67" May 17 00:32:37.639970 kubelet[1485]: I0517 00:32:37.639930 1485 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab1a348-85d4-4040-934d-4c0658d05311" path="/var/lib/kubelet/pods/dab1a348-85d4-4040-934d-4c0658d05311/volumes" May 17 00:32:38.253247 kubelet[1485]: E0517 00:32:38.253175 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:38.281413 kubelet[1485]: I0517 00:32:38.281373 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cni-path\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281413 kubelet[1485]: I0517 00:32:38.281410 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b306e443-c238-4c28-afe3-9f08a335df67-cilium-config-path\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281413 kubelet[1485]: I0517 00:32:38.281426 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-xtables-lock\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281730 kubelet[1485]: I0517 00:32:38.281450 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9jxd\" (UniqueName: \"kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-kube-api-access-q9jxd\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281730 kubelet[1485]: I0517 00:32:38.281468 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-run\") pod 
\"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281730 kubelet[1485]: I0517 00:32:38.281484 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-bpf-maps\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281730 kubelet[1485]: I0517 00:32:38.281503 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-hostproc\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281730 kubelet[1485]: I0517 00:32:38.281504 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.281730 kubelet[1485]: I0517 00:32:38.281525 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-cilium-ipsec-secrets\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281895 kubelet[1485]: I0517 00:32:38.281534 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.281895 kubelet[1485]: I0517 00:32:38.281543 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-lib-modules\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281895 kubelet[1485]: I0517 00:32:38.281564 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-net\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281895 kubelet[1485]: I0517 00:32:38.281580 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-cgroup\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281895 kubelet[1485]: I0517 00:32:38.281596 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-etc-cni-netd\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.281895 kubelet[1485]: I0517 00:32:38.281613 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-kernel\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.282056 kubelet[1485]: I0517 00:32:38.281633 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-clustermesh-secrets\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.282056 kubelet[1485]: I0517 00:32:38.281653 1485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-hubble-tls\") pod \"b306e443-c238-4c28-afe3-9f08a335df67\" (UID: \"b306e443-c238-4c28-afe3-9f08a335df67\") " May 17 00:32:38.282056 kubelet[1485]: I0517 00:32:38.281701 1485 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-xtables-lock\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.282056 kubelet[1485]: I0517 00:32:38.281714 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-run\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.282776 kubelet[1485]: I0517 00:32:38.282719 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cni-path" (OuterVolumeSpecName: "cni-path") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.282941 kubelet[1485]: I0517 00:32:38.282907 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.283067 kubelet[1485]: I0517 00:32:38.283051 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.283198 kubelet[1485]: I0517 00:32:38.283182 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-hostproc" (OuterVolumeSpecName: "hostproc") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.283620 kubelet[1485]: I0517 00:32:38.283582 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b306e443-c238-4c28-afe3-9f08a335df67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:32:38.283697 kubelet[1485]: I0517 00:32:38.283631 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.283697 kubelet[1485]: I0517 00:32:38.283651 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.283697 kubelet[1485]: I0517 00:32:38.283683 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.283801 kubelet[1485]: I0517 00:32:38.283705 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:32:38.285542 systemd[1]: var-lib-kubelet-pods-b306e443\x2dc238\x2d4c28\x2dafe3\x2d9f08a335df67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq9jxd.mount: Deactivated successfully. May 17 00:32:38.286132 kubelet[1485]: I0517 00:32:38.286094 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:32:38.287515 kubelet[1485]: I0517 00:32:38.287484 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-kube-api-access-q9jxd" (OuterVolumeSpecName: "kube-api-access-q9jxd") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "kube-api-access-q9jxd". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:32:38.287691 systemd[1]: var-lib-kubelet-pods-b306e443\x2dc238\x2d4c28\x2dafe3\x2d9f08a335df67-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:32:38.287820 kubelet[1485]: I0517 00:32:38.287784 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:32:38.287820 kubelet[1485]: I0517 00:32:38.287802 1485 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b306e443-c238-4c28-afe3-9f08a335df67" (UID: "b306e443-c238-4c28-afe3-9f08a335df67"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:32:38.289553 systemd[1]: var-lib-kubelet-pods-b306e443\x2dc238\x2d4c28\x2dafe3\x2d9f08a335df67-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:32:38.289627 systemd[1]: var-lib-kubelet-pods-b306e443\x2dc238\x2d4c28\x2dafe3\x2d9f08a335df67-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 17 00:32:38.356722 kubelet[1485]: E0517 00:32:38.356645 1485 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:32:38.381843 kubelet[1485]: I0517 00:32:38.381811 1485 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-net\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381843 kubelet[1485]: I0517 00:32:38.381834 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cilium-cgroup\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381843 kubelet[1485]: I0517 00:32:38.381843 1485 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-etc-cni-netd\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381852 1485 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-host-proc-sys-kernel\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381862 1485 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-clustermesh-secrets\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381870 1485 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-hubble-tls\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381878 1485 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-cni-path\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381886 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b306e443-c238-4c28-afe3-9f08a335df67-cilium-config-path\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381894 1485 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q9jxd\" (UniqueName: \"kubernetes.io/projected/b306e443-c238-4c28-afe3-9f08a335df67-kube-api-access-q9jxd\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381903 1485 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-bpf-maps\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.381981 kubelet[1485]: I0517 00:32:38.381910 1485 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-hostproc\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.382222 kubelet[1485]: I0517 00:32:38.381918 1485 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b306e443-c238-4c28-afe3-9f08a335df67-cilium-ipsec-secrets\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.382222 kubelet[1485]: I0517 
00:32:38.381926 1485 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b306e443-c238-4c28-afe3-9f08a335df67-lib-modules\") on node \"10.0.0.61\" DevicePath \"\"" May 17 00:32:38.460463 kubelet[1485]: E0517 00:32:38.460430 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:38.460939 env[1280]: time="2025-05-17T00:32:38.460897385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2bmf4,Uid:fa10ae7f-4300-4f87-a054-0f92459a5b47,Namespace:kube-system,Attempt:0,}" May 17 00:32:38.474189 env[1280]: time="2025-05-17T00:32:38.474083682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:32:38.474189 env[1280]: time="2025-05-17T00:32:38.474147510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:32:38.474189 env[1280]: time="2025-05-17T00:32:38.474159573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:32:38.474568 env[1280]: time="2025-05-17T00:32:38.474501177Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85f4bfa040a72e3873d8c71370f97f1bd879bc8dbf7744123bb321cd43522f0b pid=3021 runtime=io.containerd.runc.v2 May 17 00:32:38.490254 systemd[1]: Started cri-containerd-85f4bfa040a72e3873d8c71370f97f1bd879bc8dbf7744123bb321cd43522f0b.scope. May 17 00:32:38.522496 env[1280]: time="2025-05-17T00:32:38.521262974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2bmf4,Uid:fa10ae7f-4300-4f87-a054-0f92459a5b47,Namespace:kube-system,Attempt:0,} returns sandbox id \"85f4bfa040a72e3873d8c71370f97f1bd879bc8dbf7744123bb321cd43522f0b\"" May 17 00:32:38.522650 kubelet[1485]: E0517 00:32:38.521929 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:38.523128 env[1280]: time="2025-05-17T00:32:38.523082791Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:32:39.125049 systemd[1]: Removed slice kubepods-burstable-podb306e443_c238_4c28_afe3_9f08a335df67.slice. May 17 00:32:39.175160 systemd[1]: Created slice kubepods-burstable-poddfb35d7a_2007_4989_9819_54fdc9cda33e.slice. 
May 17 00:32:39.254060 kubelet[1485]: E0517 00:32:39.253985 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:39.292464 kubelet[1485]: I0517 00:32:39.292405 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-cilium-run\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292464 kubelet[1485]: I0517 00:32:39.292460 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-lib-modules\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292657 kubelet[1485]: I0517 00:32:39.292485 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dfb35d7a-2007-4989-9819-54fdc9cda33e-cilium-ipsec-secrets\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292657 kubelet[1485]: I0517 00:32:39.292501 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-host-proc-sys-net\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292657 kubelet[1485]: I0517 00:32:39.292519 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkxj\" (UniqueName: \"kubernetes.io/projected/dfb35d7a-2007-4989-9819-54fdc9cda33e-kube-api-access-lxkxj\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292657 kubelet[1485]: I0517 00:32:39.292539 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-hostproc\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292657 kubelet[1485]: I0517 00:32:39.292558 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-cni-path\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292657 kubelet[1485]: I0517 00:32:39.292578 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dfb35d7a-2007-4989-9819-54fdc9cda33e-clustermesh-secrets\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292853 kubelet[1485]: I0517 00:32:39.292612 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-host-proc-sys-kernel\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " 
pod="kube-system/cilium-rs4pl" May 17 00:32:39.292853 kubelet[1485]: I0517 00:32:39.292652 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dfb35d7a-2007-4989-9819-54fdc9cda33e-hubble-tls\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292853 kubelet[1485]: I0517 00:32:39.292693 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-bpf-maps\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292853 kubelet[1485]: I0517 00:32:39.292712 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-cilium-cgroup\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292853 kubelet[1485]: I0517 00:32:39.292729 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-etc-cni-netd\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292853 kubelet[1485]: I0517 00:32:39.292750 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfb35d7a-2007-4989-9819-54fdc9cda33e-xtables-lock\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.292988 kubelet[1485]: I0517 00:32:39.292771 1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dfb35d7a-2007-4989-9819-54fdc9cda33e-cilium-config-path\") pod \"cilium-rs4pl\" (UID: \"dfb35d7a-2007-4989-9819-54fdc9cda33e\") " pod="kube-system/cilium-rs4pl" May 17 00:32:39.486048 kubelet[1485]: E0517 00:32:39.485945 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:39.486697 env[1280]: time="2025-05-17T00:32:39.486433746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rs4pl,Uid:dfb35d7a-2007-4989-9819-54fdc9cda33e,Namespace:kube-system,Attempt:0,}" May 17 00:32:39.502387 env[1280]: time="2025-05-17T00:32:39.502317239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:32:39.502387 env[1280]: time="2025-05-17T00:32:39.502359116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:32:39.502606 env[1280]: time="2025-05-17T00:32:39.502371820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:32:39.502808 env[1280]: time="2025-05-17T00:32:39.502761564Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463 pid=3066 runtime=io.containerd.runc.v2 May 17 00:32:39.516744 systemd[1]: run-containerd-runc-k8s.io-f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463-runc.PdXBUw.mount: Deactivated successfully. May 17 00:32:39.518495 systemd[1]: Started cri-containerd-f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463.scope. May 17 00:32:39.539095 env[1280]: time="2025-05-17T00:32:39.539042030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rs4pl,Uid:dfb35d7a-2007-4989-9819-54fdc9cda33e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\"" May 17 00:32:39.539809 kubelet[1485]: E0517 00:32:39.539789 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:39.544130 env[1280]: time="2025-05-17T00:32:39.544075696Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:32:39.555430 env[1280]: time="2025-05-17T00:32:39.555319183Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90\"" May 17 00:32:39.555790 env[1280]: time="2025-05-17T00:32:39.555743040Z" level=info msg="StartContainer for \"9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90\"" May 17 00:32:39.569171 systemd[1]: Started cri-containerd-9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90.scope. May 17 00:32:39.594416 env[1280]: time="2025-05-17T00:32:39.594361588Z" level=info msg="StartContainer for \"9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90\" returns successfully" May 17 00:32:39.601073 systemd[1]: cri-containerd-9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90.scope: Deactivated successfully. 
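Each short-lived cri-containerd-*.scope above wraps one Cilium init container (mount-cgroup in this case) running under the runc shim v2 in containerd's k8s.io namespace, as the namespace=k8s.io fields and io.containerd.runtime.v2.task paths indicate. For inspecting that namespace directly, a minimal containerd Go client sketch (1.x-series client API; the default /run/containerd/containerd.sock socket is an assumption, not taken from this log):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; an assumption, not shown in this log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes containers in the "k8s.io" namespace,
	// matching the namespace=k8s.io fields in the shim entries above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}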
May 17 00:32:39.629465 env[1280]: time="2025-05-17T00:32:39.629406900Z" level=info msg="shim disconnected" id=9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90 May 17 00:32:39.629465 env[1280]: time="2025-05-17T00:32:39.629457584Z" level=warning msg="cleaning up after shim disconnected" id=9b253b9fc19a4ac096b4995d365d5a1cfdb875b0e01e634e4051764055e4bc90 namespace=k8s.io May 17 00:32:39.629465 env[1280]: time="2025-05-17T00:32:39.629467553Z" level=info msg="cleaning up dead shim" May 17 00:32:39.635853 env[1280]: time="2025-05-17T00:32:39.635801434Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3150 runtime=io.containerd.runc.v2\n" May 17 00:32:39.639952 kubelet[1485]: I0517 00:32:39.639924 1485 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b306e443-c238-4c28-afe3-9f08a335df67" path="/var/lib/kubelet/pods/b306e443-c238-4c28-afe3-9f08a335df67/volumes" May 17 00:32:40.124759 kubelet[1485]: E0517 00:32:40.124403 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:40.128542 env[1280]: time="2025-05-17T00:32:40.128490513Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:32:40.141840 env[1280]: time="2025-05-17T00:32:40.141778552Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8\"" May 17 00:32:40.142769 env[1280]: time="2025-05-17T00:32:40.142662985Z" level=info msg="StartContainer for \"d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8\"" May 17 00:32:40.158557 systemd[1]: Started cri-containerd-d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8.scope. May 17 00:32:40.189995 env[1280]: time="2025-05-17T00:32:40.189930808Z" level=info msg="StartContainer for \"d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8\" returns successfully" May 17 00:32:40.194303 systemd[1]: cri-containerd-d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8.scope: Deactivated successfully. 
May 17 00:32:40.254610 kubelet[1485]: E0517 00:32:40.254556 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:32:40.631928 env[1280]: time="2025-05-17T00:32:40.631869827Z" level=info msg="shim disconnected" id=d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8 May 17 00:32:40.631928 env[1280]: time="2025-05-17T00:32:40.631916955Z" level=warning msg="cleaning up after shim disconnected" id=d68d3fc9f88dc2085f093600a20701acd6fbde837c2b004b1e2fe31a6fb93ed8 namespace=k8s.io May 17 00:32:40.631928 env[1280]: time="2025-05-17T00:32:40.631924789Z" level=info msg="cleaning up dead shim" May 17 00:32:40.638746 env[1280]: time="2025-05-17T00:32:40.638692794Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3211 runtime=io.containerd.runc.v2\n" May 17 00:32:40.970864 env[1280]: time="2025-05-17T00:32:40.970712483Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:40.973713 env[1280]: time="2025-05-17T00:32:40.973655171Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:40.975583 env[1280]: time="2025-05-17T00:32:40.975539582Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:32:40.975984 env[1280]: time="2025-05-17T00:32:40.975949323Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 17 00:32:40.981349 env[1280]: time="2025-05-17T00:32:40.981299754Z" level=info msg="CreateContainer within sandbox \"85f4bfa040a72e3873d8c71370f97f1bd879bc8dbf7744123bb321cd43522f0b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 17 00:32:40.998063 env[1280]: time="2025-05-17T00:32:40.997998841Z" level=info msg="CreateContainer within sandbox \"85f4bfa040a72e3873d8c71370f97f1bd879bc8dbf7744123bb321cd43522f0b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f9d71f4d4ad40692e8cd1e1fd4bda6f4112a8040cf06b3cb8ffa082eaa09efb7\"" May 17 00:32:40.998590 env[1280]: time="2025-05-17T00:32:40.998548111Z" level=info msg="StartContainer for \"f9d71f4d4ad40692e8cd1e1fd4bda6f4112a8040cf06b3cb8ffa082eaa09efb7\"" May 17 00:32:41.015621 systemd[1]: Started cri-containerd-f9d71f4d4ad40692e8cd1e1fd4bda6f4112a8040cf06b3cb8ffa082eaa09efb7.scope. 
May 17 00:32:41.115905 env[1280]: time="2025-05-17T00:32:41.115836008Z" level=info msg="StartContainer for \"f9d71f4d4ad40692e8cd1e1fd4bda6f4112a8040cf06b3cb8ffa082eaa09efb7\" returns successfully" May 17 00:32:41.127135 kubelet[1485]: E0517 00:32:41.127096 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:41.129360 kubelet[1485]: E0517 00:32:41.129157 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:32:41.145574 env[1280]: time="2025-05-17T00:32:41.145427521Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:32:41.146442 kubelet[1485]: I0517 00:32:41.146367 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2bmf4" podStartSLOduration=1.6923513909999999 podStartE2EDuration="4.1463465s" podCreationTimestamp="2025-05-17 00:32:37 +0000 UTC" firstStartedPulling="2025-05-17 00:32:38.522843587 +0000 UTC m=+66.091085497" lastFinishedPulling="2025-05-17 00:32:40.976838706 +0000 UTC m=+68.545080606" observedRunningTime="2025-05-17 00:32:41.146156497 +0000 UTC m=+68.714398417" watchObservedRunningTime="2025-05-17 00:32:41.1463465 +0000 UTC m=+68.714588420" May 17 00:32:41.166989 env[1280]: time="2025-05-17T00:32:41.166841572Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754\"" May 17 00:32:41.167941 env[1280]: time="2025-05-17T00:32:41.167891885Z" level=info msg="StartContainer for \"ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754\"" May 17 00:32:41.189627 systemd[1]: Started cri-containerd-ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754.scope. May 17 00:32:41.220150 env[1280]: time="2025-05-17T00:32:41.220099995Z" level=info msg="StartContainer for \"ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754\" returns successfully" May 17 00:32:41.223042 systemd[1]: cri-containerd-ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754.scope: Deactivated successfully. 
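The pod_startup_latency_tracker entry above reports two derived figures for cilium-operator-6c4d7847fc-2bmf4: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A small sketch reproducing that arithmetic from the logged timestamps; a few nanoseconds of drift against the logged float are expected:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the tracker entry above.
	created := parse("2025-05-17 00:32:37 +0000 UTC")
	pullStart := parse("2025-05-17 00:32:38.522843587 +0000 UTC")
	pullEnd := parse("2025-05-17 00:32:40.976838706 +0000 UTC")
	running := parse("2025-05-17 00:32:41.1463465 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration (pull time excluded)
	fmt.Println(e2e) // 4.1463465s
	fmt.Println(slo) // 1.692351381s, close to the logged 1.6923513909999999
}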
May 17 00:32:41.245280 env[1280]: time="2025-05-17T00:32:41.245214956Z" level=info msg="shim disconnected" id=ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754
May 17 00:32:41.245280 env[1280]: time="2025-05-17T00:32:41.245262635Z" level=warning msg="cleaning up after shim disconnected" id=ac04269644128b2dd5b92ea89371ee2749e6cf9a3c544e601513954fff2d0754 namespace=k8s.io
May 17 00:32:41.245280 env[1280]: time="2025-05-17T00:32:41.245271381Z" level=info msg="cleaning up dead shim"
May 17 00:32:41.252797 env[1280]: time="2025-05-17T00:32:41.252729046Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3303 runtime=io.containerd.runc.v2\n"
May 17 00:32:41.254873 kubelet[1485]: E0517 00:32:41.254823 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:42.132372 kubelet[1485]: E0517 00:32:42.132341 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:42.132564 kubelet[1485]: E0517 00:32:42.132341 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:42.138053 env[1280]: time="2025-05-17T00:32:42.137988316Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:32:42.152300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834954653.mount: Deactivated successfully.
May 17 00:32:42.153871 env[1280]: time="2025-05-17T00:32:42.153817640Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1\""
May 17 00:32:42.155710 env[1280]: time="2025-05-17T00:32:42.154373123Z" level=info msg="StartContainer for \"40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1\""
May 17 00:32:42.172943 systemd[1]: Started cri-containerd-40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1.scope.
May 17 00:32:42.195037 systemd[1]: cri-containerd-40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1.scope: Deactivated successfully.
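Editor's note: the containers created in sandbox f7088063… follow Cilium's usual startup sequence: the mount-bpf-fs and clean-cilium-state init containers run to completion (hence their scopes deactivating almost immediately) before the long-running cilium-agent container starts in the entries that follow. As a hedged sketch of what a "mount-bpf-fs"-style step does, assuming the conventional /sys/fs/bpf mount point (this is not Cilium's actual implementation):

// Hedged sketch of a "mount-bpf-fs" style init step: ensure /sys/fs/bpf is
// a mounted BPF filesystem, mounting it if it is not already there.
// Linux-only and must run as root; not Cilium's actual code.
package main

import (
	"log"
	"syscall"
)

const bpfFSMagic = 0xcafe4a11 // BPF_FS_MAGIC from <linux/magic.h>

func main() {
	const target = "/sys/fs/bpf"

	var st syscall.Statfs_t
	if err := syscall.Statfs(target, &st); err == nil && st.Type == bpfFSMagic {
		log.Printf("%s already mounted as bpf", target)
		return
	}

	// Equivalent of: mount -t bpf bpf /sys/fs/bpf
	if err := syscall.Mount("bpf", target, "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpf fs on %s: %v", target, err)
	}
	log.Printf("mounted bpf fs on %s", target)
}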
May 17 00:32:42.195322 env[1280]: time="2025-05-17T00:32:42.195267392Z" level=info msg="StartContainer for \"40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1\" returns successfully"
May 17 00:32:42.215337 env[1280]: time="2025-05-17T00:32:42.215282503Z" level=info msg="shim disconnected" id=40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1
May 17 00:32:42.215337 env[1280]: time="2025-05-17T00:32:42.215330192Z" level=warning msg="cleaning up after shim disconnected" id=40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1 namespace=k8s.io
May 17 00:32:42.215337 env[1280]: time="2025-05-17T00:32:42.215338818Z" level=info msg="cleaning up dead shim"
May 17 00:32:42.221389 env[1280]: time="2025-05-17T00:32:42.221355440Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:32:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3359 runtime=io.containerd.runc.v2\n"
May 17 00:32:42.255834 kubelet[1485]: E0517 00:32:42.255781 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:42.495704 systemd[1]: run-containerd-runc-k8s.io-40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1-runc.XeJCue.mount: Deactivated successfully.
May 17 00:32:42.495783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40a783301629d4d28e6d6eb556ec8e45854b0b84aed4eba86af46db2699067f1-rootfs.mount: Deactivated successfully.
May 17 00:32:43.136727 kubelet[1485]: E0517 00:32:43.136184 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:43.141247 env[1280]: time="2025-05-17T00:32:43.141200835Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:32:43.156323 env[1280]: time="2025-05-17T00:32:43.156267194Z" level=info msg="CreateContainer within sandbox \"f7088063cbb5e7cfa18939fb2abc8668260dc792e67c9e1bc80b7d31c265a463\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cebf8a1c288e213674af07bae4884f125b1e38e4028a9b2cac37d660ecd0abca\""
May 17 00:32:43.156849 env[1280]: time="2025-05-17T00:32:43.156812198Z" level=info msg="StartContainer for \"cebf8a1c288e213674af07bae4884f125b1e38e4028a9b2cac37d660ecd0abca\""
May 17 00:32:43.172849 systemd[1]: Started cri-containerd-cebf8a1c288e213674af07bae4884f125b1e38e4028a9b2cac37d660ecd0abca.scope.
May 17 00:32:43.197715 env[1280]: time="2025-05-17T00:32:43.197654069Z" level=info msg="StartContainer for \"cebf8a1c288e213674af07bae4884f125b1e38e4028a9b2cac37d660ecd0abca\" returns successfully"
May 17 00:32:43.256422 kubelet[1485]: E0517 00:32:43.256388 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:43.463710 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:32:43.496027 systemd[1]: run-containerd-runc-k8s.io-cebf8a1c288e213674af07bae4884f125b1e38e4028a9b2cac37d660ecd0abca-runc.zI3ai9.mount: Deactivated successfully.
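Editor's note: the recurring "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the resolver limit of three that the kubelet applies, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are passed through to pods. A minimal sketch of that truncation, assuming a resolv.conf-style input; the extra 8.8.4.4 entry and the helper name applyNameserverLimit are hypothetical, and this is not kubelet's dns.go code:

// Hedged sketch (not kubelet code) of the behaviour behind the
// "Nameserver limits exceeded" errors: keep only the first three
// nameserver entries from a resolv.conf-style configuration.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the per-pod limit the kubelet enforces

func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extra entries are omitted, as the log reports
	}
	return servers
}

func main() {
	// Hypothetical node resolv.conf with more nameservers than allowed.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}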
May 17 00:32:44.140883 kubelet[1485]: E0517 00:32:44.140841 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:44.257006 kubelet[1485]: E0517 00:32:44.256936 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:45.257327 kubelet[1485]: E0517 00:32:45.257273 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:45.486816 kubelet[1485]: E0517 00:32:45.486760 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:46.078635 systemd-networkd[1105]: lxc_health: Link UP
May 17 00:32:46.087850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:32:46.087583 systemd-networkd[1105]: lxc_health: Gained carrier
May 17 00:32:46.257656 kubelet[1485]: E0517 00:32:46.257599 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:47.257797 kubelet[1485]: E0517 00:32:47.257736 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:47.488152 kubelet[1485]: E0517 00:32:47.488110 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:47.540997 kubelet[1485]: I0517 00:32:47.540851 1485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rs4pl" podStartSLOduration=8.540832797 podStartE2EDuration="8.540832797s" podCreationTimestamp="2025-05-17 00:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:32:44.153886983 +0000 UTC m=+71.722128893" watchObservedRunningTime="2025-05-17 00:32:47.540832797 +0000 UTC m=+75.109074707"
May 17 00:32:47.552828 systemd-networkd[1105]: lxc_health: Gained IPv6LL
May 17 00:32:48.149010 kubelet[1485]: E0517 00:32:48.148565 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:48.257886 kubelet[1485]: E0517 00:32:48.257835 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:49.150117 kubelet[1485]: E0517 00:32:49.150077 1485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:32:49.258008 kubelet[1485]: E0517 00:32:49.257959 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:50.258623 kubelet[1485]: E0517 00:32:50.258562 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:51.259197 kubelet[1485]: E0517 00:32:51.259141 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:52.260190 kubelet[1485]: E0517 00:32:52.260143 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:53.196999 kubelet[1485]: E0517 00:32:53.196954 1485 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:53.260553 kubelet[1485]: E0517 00:32:53.260522 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:32:54.261142 kubelet[1485]: E0517 00:32:54.261087 1485 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
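Editor's note: the once-per-second "Unable to read config path … /etc/kubernetes/manifests" errors come from the kubelet's static-pod file source: the configured manifest directory does not exist on this worker node (it runs no static pods), so each sync logs the condition and ignores it. A minimal sketch of that check, assuming the same directory; not kubelet's file source implementation:

// Hedged sketch (not kubelet code) of the check behind
// "Unable to read config path ... ignoring": if the static pod manifest
// directory is missing, log and ignore; otherwise list the manifests.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	const manifestDir = "/etc/kubernetes/manifests"

	info, err := os.Stat(manifestDir)
	switch {
	case os.IsNotExist(err):
		log.Printf("Unable to read config path: path does not exist, ignoring path=%q", manifestDir)
		return
	case err != nil:
		log.Fatalf("stat %s: %v", manifestDir, err)
	case !info.IsDir():
		log.Fatalf("%s is not a directory", manifestDir)
	}

	manifests, err := filepath.Glob(filepath.Join(manifestDir, "*.yaml"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d static pod manifests", len(manifests))
}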