May 17 00:40:01.877925 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:40:01.877952 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:40:01.877963 kernel: BIOS-provided physical RAM map:
May 17 00:40:01.877970 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:40:01.877977 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:40:01.877984 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:40:01.877993 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 17 00:40:01.878001 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 17 00:40:01.878011 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:40:01.878018 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:40:01.878025 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:40:01.878033 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:40:01.878040 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:40:01.878048 kernel: NX (Execute Disable) protection: active
May 17 00:40:01.878060 kernel: SMBIOS 2.8 present.
May 17 00:40:01.878069 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 17 00:40:01.878077 kernel: Hypervisor detected: KVM
May 17 00:40:01.878085 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:40:01.878092 kernel: kvm-clock: cpu 0, msr 2b19a001, primary cpu clock
May 17 00:40:01.878100 kernel: kvm-clock: using sched offset of 2495947504 cycles
May 17 00:40:01.878108 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:40:01.878117 kernel: tsc: Detected 2794.748 MHz processor
May 17 00:40:01.878125 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:40:01.878137 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:40:01.878145 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 17 00:40:01.878154 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:40:01.878162 kernel: Using GB pages for direct mapping
May 17 00:40:01.878170 kernel: ACPI: Early table checksum verification disabled
May 17 00:40:01.878178 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 17 00:40:01.878186 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878194 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878203 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878213 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 17 00:40:01.878221 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878229 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878237 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878245 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:40:01.878253 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 17 00:40:01.878261 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 17 00:40:01.878269 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 17 00:40:01.878282 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 17 00:40:01.878295 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 17 00:40:01.878306 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 17 00:40:01.878315 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 17 00:40:01.878323 kernel: No NUMA configuration found
May 17 00:40:01.878339 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 17 00:40:01.878351 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 17 00:40:01.878359 kernel: Zone ranges:
May 17 00:40:01.878370 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:40:01.878384 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 17 00:40:01.878392 kernel: Normal empty
May 17 00:40:01.878400 kernel: Movable zone start for each node
May 17 00:40:01.878408 kernel: Early memory node ranges
May 17 00:40:01.878417 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:40:01.878425 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 17 00:40:01.878436 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 17 00:40:01.878444 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:40:01.878453 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:40:01.878462 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:40:01.878470 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:40:01.878478 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:40:01.878487 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:40:01.878495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:40:01.878504 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:40:01.878512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:40:01.878522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:40:01.878530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:40:01.878538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:40:01.878547 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:40:01.878556 kernel: TSC deadline timer available
May 17 00:40:01.878564 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:40:01.878578 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:40:01.878588 kernel: kvm-guest: setup PV sched yield
May 17 00:40:01.878597 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:40:01.878608 kernel: Booting paravirtualized kernel on KVM
May 17 00:40:01.878617 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:40:01.878626 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 17 00:40:01.878645 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 17 00:40:01.878661 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 17 00:40:01.878670 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:40:01.878678 kernel: kvm-guest: setup async PF for cpu 0
May 17 00:40:01.878686 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 17 00:40:01.878694 kernel: kvm-guest: PV spinlocks enabled
May 17 00:40:01.878718 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:40:01.878727 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 17 00:40:01.878736 kernel: Policy zone: DMA32
May 17 00:40:01.878746 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:40:01.878756 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:40:01.878764 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:40:01.878773 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:40:01.878781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:40:01.878792 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved)
May 17 00:40:01.878801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:40:01.878809 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:40:01.878818 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:40:01.878826 kernel: rcu: Hierarchical RCU implementation.
May 17 00:40:01.878835 kernel: rcu: RCU event tracing is enabled.
May 17 00:40:01.878844 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:40:01.878852 kernel: Rude variant of Tasks RCU enabled.
May 17 00:40:01.878860 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:40:01.878871 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:40:01.878879 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:40:01.878887 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:40:01.878896 kernel: random: crng init done
May 17 00:40:01.878904 kernel: Console: colour VGA+ 80x25
May 17 00:40:01.878913 kernel: printk: console [ttyS0] enabled
May 17 00:40:01.878921 kernel: ACPI: Core revision 20210730
May 17 00:40:01.878930 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:40:01.878938 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:40:01.878948 kernel: x2apic enabled
May 17 00:40:01.878956 kernel: Switched APIC routing to physical x2apic.
May 17 00:40:01.878965 kernel: kvm-guest: setup PV IPIs
May 17 00:40:01.878974 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:40:01.878982 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:40:01.878991 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 17 00:40:01.879000 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:40:01.879008 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:40:01.879017 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:40:01.879033 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:40:01.879041 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:40:01.879051 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:40:01.879062 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:40:01.879070 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:40:01.879079 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:40:01.879088 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 17 00:40:01.879097 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:40:01.879106 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:40:01.879117 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:40:01.879126 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:40:01.879135 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:40:01.879144 kernel: Freeing SMP alternatives memory: 32K
May 17 00:40:01.879153 kernel: pid_max: default: 32768 minimum: 301
May 17 00:40:01.879161 kernel: LSM: Security Framework initializing
May 17 00:40:01.879170 kernel: SELinux: Initializing.
May 17 00:40:01.879179 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:40:01.879190 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:40:01.879199 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:40:01.879208 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:40:01.879217 kernel: ... version: 0
May 17 00:40:01.879226 kernel: ... bit width: 48
May 17 00:40:01.879235 kernel: ... generic registers: 6
May 17 00:40:01.879244 kernel: ... value mask: 0000ffffffffffff
May 17 00:40:01.879253 kernel: ... max period: 00007fffffffffff
May 17 00:40:01.879262 kernel: ... fixed-purpose events: 0
May 17 00:40:01.879272 kernel: ... event mask: 000000000000003f
May 17 00:40:01.879281 kernel: signal: max sigframe size: 1776
May 17 00:40:01.879290 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:40:01.879299 kernel: smp: Bringing up secondary CPUs ...
May 17 00:40:01.879307 kernel: x86: Booting SMP configuration:
May 17 00:40:01.879316 kernel: .... node #0, CPUs: #1
May 17 00:40:01.879325 kernel: kvm-clock: cpu 1, msr 2b19a041, secondary cpu clock
May 17 00:40:01.879334 kernel: kvm-guest: setup async PF for cpu 1
May 17 00:40:01.879343 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 17 00:40:01.879353 kernel: #2
May 17 00:40:01.879362 kernel: kvm-clock: cpu 2, msr 2b19a081, secondary cpu clock
May 17 00:40:01.879371 kernel: kvm-guest: setup async PF for cpu 2
May 17 00:40:01.879378 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 17 00:40:01.879386 kernel: #3
May 17 00:40:01.879394 kernel: kvm-clock: cpu 3, msr 2b19a0c1, secondary cpu clock
May 17 00:40:01.879403 kernel: kvm-guest: setup async PF for cpu 3
May 17 00:40:01.879412 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 17 00:40:01.879421 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:40:01.879432 kernel: smpboot: Max logical packages: 1
May 17 00:40:01.879441 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 17 00:40:01.879450 kernel: devtmpfs: initialized
May 17 00:40:01.879459 kernel: x86/mm: Memory block size: 128MB
May 17 00:40:01.879468 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:40:01.879477 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:40:01.879486 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:40:01.879495 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:40:01.879504 kernel: audit: initializing netlink subsys (disabled)
May 17 00:40:01.879512 kernel: audit: type=2000 audit(1747442401.457:1): state=initialized audit_enabled=0 res=1
May 17 00:40:01.879523 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:40:01.879531 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:40:01.879540 kernel: cpuidle: using governor menu
May 17 00:40:01.879549 kernel: ACPI: bus type PCI registered
May 17 00:40:01.879558 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:40:01.879567 kernel: dca service started, version 1.12.1
May 17 00:40:01.879576 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:40:01.879584 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 17 00:40:01.879595 kernel: PCI: Using configuration type 1 for base access
May 17 00:40:01.879604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:40:01.879613 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:40:01.879624 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:40:01.879649 kernel: ACPI: Added _OSI(Module Device)
May 17 00:40:01.879658 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:40:01.879667 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:40:01.879676 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:40:01.879685 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:40:01.879694 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:40:01.879720 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:40:01.879729 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:40:01.879738 kernel: ACPI: Interpreter enabled
May 17 00:40:01.879747 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:40:01.879756 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:40:01.879765 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:40:01.879773 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:40:01.879782 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:40:01.879951 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:40:01.880040 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:40:01.880148 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:40:01.880165 kernel: PCI host bridge to bus 0000:00
May 17 00:40:01.880257 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:40:01.880366 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:40:01.880433 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:40:01.880516 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:40:01.880597 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:40:01.880761 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:40:01.880857 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:40:01.880973 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:40:01.881078 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:40:01.881374 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:40:01.882152 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:40:01.882228 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:40:01.882306 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:40:01.882399 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:40:01.882473 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 17 00:40:01.882574 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:40:01.882696 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:40:01.882821 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:40:01.882922 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:40:01.883020 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:40:01.883125 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:40:01.883221 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:40:01.883325 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 17 00:40:01.883409 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 17 00:40:01.883484 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 17 00:40:01.883567 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:40:01.883689 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:40:01.883835 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:40:01.883975 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:40:01.884066 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 17 00:40:01.884178 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 17 00:40:01.884283 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:40:01.884401 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:40:01.884417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:40:01.884426 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:40:01.884435 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:40:01.884444 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:40:01.884456 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:40:01.884464 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:40:01.884471 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:40:01.884478 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:40:01.884485 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:40:01.884491 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:40:01.884498 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:40:01.884505 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:40:01.884512 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:40:01.884520 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:40:01.884529 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:40:01.884538 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:40:01.884547 kernel: iommu: Default domain type: Translated
May 17 00:40:01.884554 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:40:01.884673 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:40:01.884778 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:40:01.884848 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:40:01.884857 kernel: vgaarb: loaded
May 17 00:40:01.884868 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:40:01.884876 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:40:01.884883 kernel: PTP clock support registered
May 17 00:40:01.884889 kernel: PCI: Using ACPI for IRQ routing
May 17 00:40:01.884897 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:40:01.884903 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:40:01.884911 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 17 00:40:01.884925 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:40:01.884936 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:40:01.884950 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:40:01.884965 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:40:01.884976 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:40:01.884983 kernel: pnp: PnP ACPI init
May 17 00:40:01.885075 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:40:01.885086 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:40:01.885094 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:40:01.885101 kernel: NET: Registered PF_INET protocol family
May 17 00:40:01.885111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:40:01.885118 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:40:01.885125 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:40:01.885132 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:40:01.885139 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 17 00:40:01.885146 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:40:01.885153 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:40:01.885159 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:40:01.885167 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:40:01.885174 kernel: NET: Registered PF_XDP protocol family
May 17 00:40:01.885248 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:40:01.885332 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:40:01.885393 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:40:01.885494 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:40:01.885587 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:40:01.885674 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:40:01.885687 kernel: PCI: CLS 0 bytes, default 64
May 17 00:40:01.885700 kernel: Initialise system trusted keyrings
May 17 00:40:01.885727 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:40:01.885740 kernel: Key type asymmetric registered
May 17 00:40:01.885748 kernel: Asymmetric key parser 'x509' registered
May 17 00:40:01.885757 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:40:01.885766 kernel: io scheduler mq-deadline registered
May 17 00:40:01.885775 kernel: io scheduler kyber registered
May 17 00:40:01.885784 kernel: io scheduler bfq registered
May 17 00:40:01.885791 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:40:01.885802 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:40:01.885809 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:40:01.885816 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:40:01.885823 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:40:01.885830 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:40:01.885840 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:40:01.885851 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:40:01.885860 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:40:01.885955 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:40:01.885969 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:40:01.886031 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:40:01.886095 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:40:01 UTC (1747442401)
May 17 00:40:01.886163 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:40:01.886172 kernel: NET: Registered PF_INET6 protocol family
May 17 00:40:01.886179 kernel: Segment Routing with IPv6
May 17 00:40:01.886186 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:40:01.886193 kernel: NET: Registered PF_PACKET protocol family
May 17 00:40:01.886202 kernel: Key type dns_resolver registered
May 17 00:40:01.886209 kernel: IPI shorthand broadcast: enabled
May 17 00:40:01.886217 kernel: sched_clock: Marking stable (451027974, 106605875)->(599092907, -41459058)
May 17 00:40:01.886224 kernel: registered taskstats version 1
May 17 00:40:01.886231 kernel: Loading compiled-in X.509 certificates
May 17 00:40:01.886238 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:40:01.886245 kernel: Key type .fscrypt registered
May 17 00:40:01.886252 kernel: Key type fscrypt-provisioning registered
May 17 00:40:01.886261 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:40:01.886279 kernel: ima: Allocated hash algorithm: sha1
May 17 00:40:01.886289 kernel: ima: No architecture policies found
May 17 00:40:01.886298 kernel: clk: Disabling unused clocks
May 17 00:40:01.886306 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:40:01.886334 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:40:01.886341 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:40:01.886348 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:40:01.886355 kernel: Run /init as init process
May 17 00:40:01.886364 kernel: with arguments:
May 17 00:40:01.886375 kernel: /init
May 17 00:40:01.886384 kernel: with environment:
May 17 00:40:01.886393 kernel: HOME=/
May 17 00:40:01.886402 kernel: TERM=linux
May 17 00:40:01.886411 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:40:01.886423 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:40:01.886446 systemd[1]: Detected virtualization kvm.
May 17 00:40:01.886454 systemd[1]: Detected architecture x86-64.
May 17 00:40:01.886463 systemd[1]: Running in initrd.
May 17 00:40:01.886471 systemd[1]: No hostname configured, using default hostname.
May 17 00:40:01.886478 systemd[1]: Hostname set to .
May 17 00:40:01.886486 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:40:01.886493 systemd[1]: Queued start job for default target initrd.target.
May 17 00:40:01.886501 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:40:01.886515 systemd[1]: Reached target cryptsetup.target.
May 17 00:40:01.886532 systemd[1]: Reached target paths.target.
May 17 00:40:01.886546 systemd[1]: Reached target slices.target.
May 17 00:40:01.886563 systemd[1]: Reached target swap.target.
May 17 00:40:01.886575 systemd[1]: Reached target timers.target.
May 17 00:40:01.886585 systemd[1]: Listening on iscsid.socket.
May 17 00:40:01.886595 systemd[1]: Listening on iscsiuio.socket.
May 17 00:40:01.886607 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:40:01.886618 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:40:01.886648 systemd[1]: Listening on systemd-journald.socket.
May 17 00:40:01.886656 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:40:01.886664 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:40:01.886672 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:40:01.886682 systemd[1]: Reached target sockets.target.
May 17 00:40:01.886692 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:40:01.886703 systemd[1]: Finished network-cleanup.service.
May 17 00:40:01.886727 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:40:01.886748 systemd[1]: Starting systemd-journald.service...
May 17 00:40:01.886757 systemd[1]: Starting systemd-modules-load.service...
May 17 00:40:01.886766 systemd[1]: Starting systemd-resolved.service...
May 17 00:40:01.886777 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:40:01.886787 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:40:01.886797 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:40:01.886807 kernel: audit: type=1130 audit(1747442401.877:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.886826 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:40:01.886847 systemd-journald[199]: Journal started
May 17 00:40:01.886904 systemd-journald[199]: Runtime Journal (/run/log/journal/83fcb09c6e9d4174ab69b88565b85a9d) is 6.0M, max 48.5M, 42.5M free.
May 17 00:40:01.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.881751 systemd-modules-load[200]: Inserted module 'overlay'
May 17 00:40:01.921400 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:40:01.921426 systemd[1]: Started systemd-journald.service.
May 17 00:40:01.896747 systemd-resolved[201]: Positive Trust Anchors:
May 17 00:40:01.927026 kernel: audit: type=1130 audit(1747442401.922:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.896759 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:40:01.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.896786 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:40:01.941163 kernel: audit: type=1130 audit(1747442401.927:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.941177 kernel: audit: type=1130 audit(1747442401.927:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.898939 systemd-resolved[201]: Defaulting to hostname 'linux'.
May 17 00:40:01.945790 kernel: Bridge firewalling registered
May 17 00:40:01.945817 kernel: audit: type=1130 audit(1747442401.942:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.922336 systemd[1]: Started systemd-resolved.service.
May 17 00:40:01.928107 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:40:01.931249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:40:01.942231 systemd[1]: Reached target nss-lookup.target.
May 17 00:40:01.945615 systemd-modules-load[200]: Inserted module 'br_netfilter'
May 17 00:40:01.948132 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:40:01.965495 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:40:01.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.967034 systemd[1]: Starting dracut-cmdline.service...
May 17 00:40:01.972666 kernel: audit: type=1130 audit(1747442401.966:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.972687 kernel: SCSI subsystem initialized
May 17 00:40:01.977028 dracut-cmdline[218]: dracut-dracut-053
May 17 00:40:01.979548 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:40:01.987330 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:40:01.987357 kernel: device-mapper: uevent: version 1.0.3
May 17 00:40:01.988733 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:40:01.991927 systemd-modules-load[200]: Inserted module 'dm_multipath'
May 17 00:40:01.992757 systemd[1]: Finished systemd-modules-load.service.
May 17 00:40:01.998596 kernel: audit: type=1130 audit(1747442401.993:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:01.994166 systemd[1]: Starting systemd-sysctl.service...
May 17 00:40:02.003945 systemd[1]: Finished systemd-sysctl.service.
May 17 00:40:02.009118 kernel: audit: type=1130 audit(1747442402.003:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.058748 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:40:02.076738 kernel: iscsi: registered transport (tcp)
May 17 00:40:02.101740 kernel: iscsi: registered transport (qla4xxx)
May 17 00:40:02.101766 kernel: QLogic iSCSI HBA Driver
May 17 00:40:02.133140 systemd[1]: Finished dracut-cmdline.service.
May 17 00:40:02.139086 kernel: audit: type=1130 audit(1747442402.132:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.134018 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:40:02.181740 kernel: raid6: avx2x4 gen() 27593 MB/s
May 17 00:40:02.198733 kernel: raid6: avx2x4 xor() 7349 MB/s
May 17 00:40:02.215733 kernel: raid6: avx2x2 gen() 31880 MB/s
May 17 00:40:02.232738 kernel: raid6: avx2x2 xor() 19112 MB/s
May 17 00:40:02.249740 kernel: raid6: avx2x1 gen() 25072 MB/s
May 17 00:40:02.266741 kernel: raid6: avx2x1 xor() 14441 MB/s
May 17 00:40:02.283741 kernel: raid6: sse2x4 gen() 14522 MB/s
May 17 00:40:02.300745 kernel: raid6: sse2x4 xor() 6480 MB/s
May 17 00:40:02.317757 kernel: raid6: sse2x2 gen() 14581 MB/s
May 17 00:40:02.334764 kernel: raid6: sse2x2 xor() 9421 MB/s
May 17 00:40:02.351749 kernel: raid6: sse2x1 gen() 11765 MB/s
May 17 00:40:02.369168 kernel: raid6: sse2x1 xor() 7480 MB/s
May 17 00:40:02.369238 kernel: raid6: using algorithm avx2x2 gen() 31880 MB/s
May 17 00:40:02.369247 kernel: raid6: .... xor() 19112 MB/s, rmw enabled
May 17 00:40:02.369906 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:40:02.382739 kernel: xor: automatically using best checksumming function avx
May 17 00:40:02.473739 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 17 00:40:02.482235 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:40:02.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.484000 audit: BPF prog-id=7 op=LOAD
May 17 00:40:02.484000 audit: BPF prog-id=8 op=LOAD
May 17 00:40:02.485688 systemd[1]: Starting systemd-udevd.service...
May 17 00:40:02.498800 systemd-udevd[402]: Using default interface naming scheme 'v252'.
May 17 00:40:02.504000 systemd[1]: Started systemd-udevd.service.
May 17 00:40:02.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.504866 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:40:02.514806 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
May 17 00:40:02.540353 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:40:02.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.541256 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:40:02.579279 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:40:02.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:02.618311 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 17 00:40:02.661307 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:40:02.661323 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:40:02.661332 kernel: GPT:9289727 != 19775487
May 17 00:40:02.661341 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:40:02.661349 kernel: GPT:9289727 != 19775487
May 17 00:40:02.661357 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:40:02.661370 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:40:02.661379 kernel: libata version 3.00 loaded.
May 17 00:40:02.661387 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:40:02.661396 kernel: AES CTR mode by8 optimization enabled
May 17 00:40:02.665739 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:40:02.683335 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:40:02.683364 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:40:02.683512 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:40:02.683669 kernel: scsi host0: ahci
May 17 00:40:02.683829 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
May 17 00:40:02.683841 kernel: scsi host1: ahci
May 17 00:40:02.683926 kernel: scsi host2: ahci
May 17 00:40:02.684004 kernel: scsi host3: ahci
May 17 00:40:02.684100 kernel: scsi host4: ahci
May 17 00:40:02.684181 kernel: scsi host5: ahci
May 17 00:40:02.684307 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 17 00:40:02.684318 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 17 00:40:02.684327 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 17 00:40:02.684336 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 17 00:40:02.684344 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 17 00:40:02.684353 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 17 00:40:02.676670 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 17 00:40:02.707614 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 17 00:40:02.708648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 17 00:40:02.718314 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 17 00:40:02.724713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:40:02.726299 systemd[1]: Starting disk-uuid.service...
May 17 00:40:02.879061 disk-uuid[532]: Primary Header is updated.
May 17 00:40:02.879061 disk-uuid[532]: Secondary Entries is updated.
May 17 00:40:02.879061 disk-uuid[532]: Secondary Header is updated.
May 17 00:40:02.884977 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:40:02.887751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:40:02.894747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:40:02.990732 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 17 00:40:02.990787 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:40:02.998738 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:40:02.998770 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:40:02.999745 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 17 00:40:03.000745 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:40:03.001744 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 17 00:40:03.003663 kernel: ata3.00: applying bridge limits
May 17 00:40:03.003764 kernel: ata3.00: configured for UDMA/100
May 17 00:40:03.004748 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:40:03.038265 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 17 00:40:03.055839 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:40:03.055860 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 17 00:40:03.893904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:40:03.893972 disk-uuid[533]: The operation has completed successfully.
May 17 00:40:03.920364 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:40:03.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:03.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:03.920477 systemd[1]: Finished disk-uuid.service.
May 17 00:40:03.924974 systemd[1]: Starting verity-setup.service...
May 17 00:40:03.938752 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:40:03.964687 systemd[1]: Found device dev-mapper-usr.device.
May 17 00:40:03.968469 systemd[1]: Mounting sysusr-usr.mount...
May 17 00:40:03.970925 systemd[1]: Finished verity-setup.service.
May 17 00:40:03.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.045761 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 17 00:40:04.045887 systemd[1]: Mounted sysusr-usr.mount.
May 17 00:40:04.046086 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 17 00:40:04.047450 systemd[1]: Starting ignition-setup.service...
May 17 00:40:04.050301 systemd[1]: Starting parse-ip-for-networkd.service...
May 17 00:40:04.063641 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:40:04.063730 kernel: BTRFS info (device vda6): using free space tree
May 17 00:40:04.063742 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:40:04.075341 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:40:04.086453 systemd[1]: Finished ignition-setup.service.
May 17 00:40:04.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.088347 systemd[1]: Starting ignition-fetch-offline.service...
May 17 00:40:04.136457 systemd[1]: Finished parse-ip-for-networkd.service.
May 17 00:40:04.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.137000 audit: BPF prog-id=9 op=LOAD
May 17 00:40:04.139032 systemd[1]: Starting systemd-networkd.service...
May 17 00:40:04.147471 ignition[651]: Ignition 2.14.0
May 17 00:40:04.147481 ignition[651]: Stage: fetch-offline
May 17 00:40:04.147572 ignition[651]: no configs at "/usr/lib/ignition/base.d"
May 17 00:40:04.147582 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:40:04.147718 ignition[651]: parsed url from cmdline: ""
May 17 00:40:04.147722 ignition[651]: no config URL provided
May 17 00:40:04.147727 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:40:04.147735 ignition[651]: no config at "/usr/lib/ignition/user.ign"
May 17 00:40:04.147754 ignition[651]: op(1): [started] loading QEMU firmware config module
May 17 00:40:04.147763 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 17 00:40:04.158725 ignition[651]: op(1): [finished] loading QEMU firmware config module
May 17 00:40:04.160457 ignition[651]: parsing config with SHA512: bd9a8251299c7ef0646097522dd5e1f619cab4b803ea3b5867a420b3676e4a1f3663502c8ed3b58ba4c296da76e42df764b4a4247309d2edbe0156eceeb99d43
May 17 00:40:04.166539 unknown[651]: fetched base config from "system"
May 17 00:40:04.166555 unknown[651]: fetched user config from "qemu"
May 17 00:40:04.168515 ignition[651]: fetch-offline: fetch-offline passed
May 17 00:40:04.168596 ignition[651]: Ignition finished successfully
May 17 00:40:04.169991 systemd-networkd[720]: lo: Link UP
May 17 00:40:04.169996 systemd-networkd[720]: lo: Gained carrier
May 17 00:40:04.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.170577 systemd-networkd[720]: Enumeration completed
May 17 00:40:04.170693 systemd[1]: Started systemd-networkd.service.
May 17 00:40:04.170990 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:40:04.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.172078 systemd-networkd[720]: eth0: Link UP
May 17 00:40:04.172082 systemd-networkd[720]: eth0: Gained carrier
May 17 00:40:04.172578 systemd[1]: Reached target network.target.
May 17 00:40:04.175235 systemd[1]: Starting iscsiuio.service...
May 17 00:40:04.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.177024 systemd[1]: Finished ignition-fetch-offline.service.
May 17 00:40:04.179555 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:40:04.180622 systemd[1]: Starting ignition-kargs.service...
May 17 00:40:04.189860 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:40:04.189860 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 17 00:40:04.189860 iscsid[728]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 17 00:40:04.189860 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 17 00:40:04.189860 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored.
May 17 00:40:04.189860 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 17 00:40:04.189860 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 17 00:40:04.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.182124 systemd[1]: Started iscsiuio.service.
May 17 00:40:04.190755 ignition[727]: Ignition 2.14.0
May 17 00:40:04.185222 systemd[1]: Starting iscsid.service...
May 17 00:40:04.190763 ignition[727]: Stage: kargs
May 17 00:40:04.190002 systemd[1]: Started iscsid.service.
May 17 00:40:04.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.190868 ignition[727]: no configs at "/usr/lib/ignition/base.d"
May 17 00:40:04.191818 systemd[1]: Starting dracut-initqueue.service...
May 17 00:40:04.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.190879 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:40:04.192329 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:40:04.191833 ignition[727]: kargs: kargs passed
May 17 00:40:04.194107 systemd[1]: Finished ignition-kargs.service.
May 17 00:40:04.191887 ignition[727]: Ignition finished successfully
May 17 00:40:04.199325 systemd[1]: Starting ignition-disks.service...
May 17 00:40:04.208997 ignition[738]: Ignition 2.14.0
May 17 00:40:04.207607 systemd[1]: Finished dracut-initqueue.service.
May 17 00:40:04.209006 ignition[738]: Stage: disks
May 17 00:40:04.208942 systemd[1]: Reached target remote-fs-pre.target.
May 17 00:40:04.209138 ignition[738]: no configs at "/usr/lib/ignition/base.d"
May 17 00:40:04.211505 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:40:04.209151 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:40:04.212540 systemd[1]: Reached target remote-fs.target.
May 17 00:40:04.243972 systemd-fsck[755]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 17 00:40:04.210075 ignition[738]: disks: disks passed
May 17 00:40:04.215263 systemd[1]: Starting dracut-pre-mount.service...
May 17 00:40:04.210125 ignition[738]: Ignition finished successfully
May 17 00:40:04.217114 systemd[1]: Finished ignition-disks.service.
May 17 00:40:04.218660 systemd[1]: Reached target initrd-root-device.target.
May 17 00:40:04.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.220418 systemd[1]: Reached target local-fs-pre.target.
May 17 00:40:04.221470 systemd[1]: Reached target local-fs.target.
May 17 00:40:04.222363 systemd[1]: Reached target sysinit.target.
May 17 00:40:04.223283 systemd[1]: Reached target basic.target.
May 17 00:40:04.224494 systemd[1]: Finished dracut-pre-mount.service.
May 17 00:40:04.227017 systemd[1]: Starting systemd-fsck-root.service...
May 17 00:40:04.260930 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 17 00:40:04.249181 systemd[1]: Finished systemd-fsck-root.service.
May 17 00:40:04.251948 systemd[1]: Mounting sysroot.mount...
May 17 00:40:04.261176 systemd[1]: Mounted sysroot.mount.
May 17 00:40:04.262575 systemd[1]: Reached target initrd-root-fs.target.
May 17 00:40:04.266303 systemd[1]: Mounting sysroot-usr.mount...
May 17 00:40:04.267583 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 17 00:40:04.267648 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:40:04.267691 systemd[1]: Reached target ignition-diskful.target.
May 17 00:40:04.270001 systemd[1]: Mounted sysroot-usr.mount.
May 17 00:40:04.272826 systemd[1]: Starting initrd-setup-root.service...
May 17 00:40:04.278151 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:40:04.281608 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory
May 17 00:40:04.285958 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:40:04.290153 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:40:04.319618 systemd[1]: Finished initrd-setup-root.service.
May 17 00:40:04.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.322585 systemd[1]: Starting ignition-mount.service...
May 17 00:40:04.324938 systemd[1]: Starting sysroot-boot.service...
May 17 00:40:04.328425 bash[806]: umount: /sysroot/usr/share/oem: not mounted.
May 17 00:40:04.337567 ignition[807]: INFO : Ignition 2.14.0
May 17 00:40:04.337567 ignition[807]: INFO : Stage: mount
May 17 00:40:04.340393 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:40:04.340393 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:40:04.340393 ignition[807]: INFO : mount: mount passed
May 17 00:40:04.340393 ignition[807]: INFO : Ignition finished successfully
May 17 00:40:04.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.339496 systemd[1]: Finished ignition-mount.service.
May 17 00:40:04.348277 systemd[1]: Finished sysroot-boot.service.
May 17 00:40:04.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:04.979276 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 17 00:40:04.986736 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
May 17 00:40:04.989704 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:40:04.989732 kernel: BTRFS info (device vda6): using free space tree
May 17 00:40:04.989742 kernel: BTRFS info (device vda6): has skinny extents
May 17 00:40:04.995038 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 17 00:40:04.996942 systemd[1]: Starting ignition-files.service...
May 17 00:40:05.013674 ignition[836]: INFO : Ignition 2.14.0
May 17 00:40:05.013674 ignition[836]: INFO : Stage: files
May 17 00:40:05.015701 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:40:05.015701 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:40:05.015701 ignition[836]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:40:05.020306 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:40:05.020306 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:40:05.023813 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:40:05.023813 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:40:05.026827 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:40:05.026827 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:40:05.026827 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:40:05.025952 unknown[836]: wrote ssh authorized keys file for user: core
May 17 00:40:05.033745 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:40:05.033745 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:40:05.033745 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:40:05.033745 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:40:05.033745 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:40:05.033745 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:40:05.578673 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 17 00:40:05.973273 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:40:05.973273 ignition[836]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 17 00:40:05.983789 ignition[836]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:40:05.983789 ignition[836]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:40:05.983789 ignition[836]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 17 00:40:05.983789 ignition[836]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:40:05.983789 ignition[836]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:40:06.007272 ignition[836]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:40:06.009021 ignition[836]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:40:06.009021 ignition[836]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:40:06.009021 ignition[836]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:40:06.009021 ignition[836]: INFO : files: files passed
May 17 00:40:06.009021 ignition[836]: INFO : Ignition finished successfully
May 17 00:40:06.016801 systemd[1]: Finished ignition-files.service.
May 17 00:40:06.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.018559 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 00:40:06.018644 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 00:40:06.019268 systemd[1]: Starting ignition-quench.service...
May 17 00:40:06.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.022606 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:40:06.022681 systemd[1]: Finished ignition-quench.service.
May 17 00:40:06.026833 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 17 00:40:06.029737 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:40:06.031773 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 00:40:06.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.031883 systemd[1]: Reached target ignition-complete.target.
May 17 00:40:06.034553 systemd[1]: Starting initrd-parse-etc.service...
May 17 00:40:06.048833 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:40:06.048921 systemd[1]: Finished initrd-parse-etc.service.
May 17 00:40:06.050041 systemd[1]: Reached target initrd-fs.target.
May 17 00:40:06.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.051480 systemd[1]: Reached target initrd.target.
May 17 00:40:06.053816 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 00:40:06.054465 systemd[1]: Starting dracut-pre-pivot.service...
May 17 00:40:06.064159 systemd[1]: Finished dracut-pre-pivot.service.
May 17 00:40:06.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.064959 systemd[1]: Starting initrd-cleanup.service...
May 17 00:40:06.072836 systemd[1]: Stopped target nss-lookup.target.
May 17 00:40:06.073783 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 00:40:06.075431 systemd[1]: Stopped target timers.target.
May 17 00:40:06.076959 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:40:06.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.077043 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 00:40:06.078487 systemd[1]: Stopped target initrd.target.
May 17 00:40:06.080057 systemd[1]: Stopped target basic.target.
May 17 00:40:06.081002 systemd[1]: Stopped target ignition-complete.target.
May 17 00:40:06.083010 systemd[1]: Stopped target ignition-diskful.target.
May 17 00:40:06.084537 systemd[1]: Stopped target initrd-root-device.target.
May 17 00:40:06.086201 systemd[1]: Stopped target remote-fs.target.
May 17 00:40:06.087792 systemd[1]: Stopped target remote-fs-pre.target.
May 17 00:40:06.089337 systemd[1]: Stopped target sysinit.target.
May 17 00:40:06.090819 systemd[1]: Stopped target local-fs.target.
May 17 00:40:06.092354 systemd[1]: Stopped target local-fs-pre.target.
May 17 00:40:06.093853 systemd[1]: Stopped target swap.target.
May 17 00:40:06.095230 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:40:06.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.095314 systemd[1]: Stopped dracut-pre-mount.service.
May 17 00:40:06.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.096859 systemd[1]: Stopped target cryptsetup.target.
May 17 00:40:06.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.098158 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:40:06.098240 systemd[1]: Stopped dracut-initqueue.service.
May 17 00:40:06.099962 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:40:06.100048 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 00:40:06.101544 systemd[1]: Stopped target paths.target.
May 17 00:40:06.102935 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:40:06.106744 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 00:40:06.107944 systemd[1]: Stopped target slices.target.
May 17 00:40:06.109691 systemd[1]: Stopped target sockets.target.
May 17 00:40:06.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.111288 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:40:06.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.111373 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 00:40:06.117257 iscsid[728]: iscsid shutting down.
May 17 00:40:06.112042 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:40:06.112120 systemd[1]: Stopped ignition-files.service.
May 17 00:40:06.115028 systemd[1]: Stopping ignition-mount.service...
May 17 00:40:06.116292 systemd[1]: Stopping iscsid.service...
May 17 00:40:06.122814 ignition[876]: INFO : Ignition 2.14.0
May 17 00:40:06.122814 ignition[876]: INFO : Stage: umount
May 17 00:40:06.125681 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:40:06.125681 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:40:06.125681 ignition[876]: INFO : umount: umount passed
May 17 00:40:06.125681 ignition[876]: INFO : Ignition finished successfully
May 17 00:40:06.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.123950 systemd[1]: Stopping sysroot-boot.service...
May 17 00:40:06.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.125700 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:40:06.125857 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 00:40:06.128071 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:40:06.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.128173 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 00:40:06.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.132281 systemd[1]: iscsid.service: Deactivated successfully.
May 17 00:40:06.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.132367 systemd[1]: Stopped iscsid.service.
May 17 00:40:06.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.133976 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:40:06.134040 systemd[1]: Stopped ignition-mount.service.
May 17 00:40:06.136058 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:40:06.136118 systemd[1]: Closed iscsid.socket.
May 17 00:40:06.161921 kernel: kauditd_printk_skb: 48 callbacks suppressed
May 17 00:40:06.161956 kernel: audit: type=1131 audit(1747442406.154:59): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.138032 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:40:06.138070 systemd[1]: Stopped ignition-disks.service.
May 17 00:40:06.140233 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:40:06.171484 kernel: audit: type=1131 audit(1747442406.164:60): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.140264 systemd[1]: Stopped ignition-kargs.service.
May 17 00:40:06.142560 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:40:06.142605 systemd[1]: Stopped ignition-setup.service.
May 17 00:40:06.142805 systemd[1]: Stopping iscsiuio.service...
May 17 00:40:06.143909 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:40:06.144436 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:40:06.144523 systemd[1]: Finished initrd-cleanup.service.
May 17 00:40:06.146372 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:40:06.146446 systemd[1]: Stopped sysroot-boot.service.
May 17 00:40:06.148443 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 00:40:06.148532 systemd[1]: Stopped iscsiuio.service.
May 17 00:40:06.150811 systemd[1]: Stopped target network.target.
May 17 00:40:06.151945 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:40:06.151977 systemd[1]: Closed iscsiuio.socket.
May 17 00:40:06.153478 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:40:06.153526 systemd[1]: Stopped initrd-setup-root.service.
May 17 00:40:06.155473 systemd[1]: Stopping systemd-networkd.service...
May 17 00:40:06.162122 systemd[1]: Stopping systemd-resolved.service...
May 17 00:40:06.163027 systemd-networkd[720]: eth0: DHCPv6 lease lost
May 17 00:40:06.164193 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:40:06.164297 systemd[1]: Stopped systemd-networkd.service.
May 17 00:40:06.169735 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:40:06.169787 systemd[1]: Closed systemd-networkd.socket.
May 17 00:40:06.181614 systemd[1]: Stopping network-cleanup.service...
May 17 00:40:06.183000 audit: BPF prog-id=9 op=UNLOAD
May 17 00:40:06.186748 kernel: audit: type=1334 audit(1747442406.183:61): prog-id=9 op=UNLOAD
May 17 00:40:06.193928 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:40:06.194036 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 00:40:06.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.196840 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:40:06.202840 kernel: audit: type=1131 audit(1747442406.196:62): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.202865 kernel: audit: type=1131 audit(1747442406.201:63): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.196880 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:40:06.206659 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:40:06.206751 systemd[1]: Stopped systemd-modules-load.service.
May 17 00:40:06.213264 kernel: audit: type=1131 audit(1747442406.208:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.209628 systemd[1]: Stopping systemd-udevd.service...
May 17 00:40:06.215356 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:40:06.216034 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:40:06.221877 kernel: audit: type=1131 audit(1747442406.216:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.216135 systemd[1]: Stopped systemd-resolved.service.
May 17 00:40:06.222378 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:40:06.228814 kernel: audit: type=1131 audit(1747442406.223:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.222481 systemd[1]: Stopped network-cleanup.service.
May 17 00:40:06.230167 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:40:06.230000 audit: BPF prog-id=6 op=UNLOAD
May 17 00:40:06.230338 systemd[1]: Stopped systemd-udevd.service.
May 17 00:40:06.237450 kernel: audit: type=1334 audit(1747442406.230:67): prog-id=6 op=UNLOAD
May 17 00:40:06.238375 kernel: audit: type=1131 audit(1747442406.232:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.232887 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:40:06.232943 systemd[1]: Closed systemd-udevd-control.socket.
May 17 00:40:06.238459 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:40:06.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.238498 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 00:40:06.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.240616 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:40:06.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.240658 systemd[1]: Stopped dracut-pre-udev.service.
May 17 00:40:06.242611 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:40:06.242649 systemd[1]: Stopped dracut-cmdline.service.
May 17 00:40:06.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.244756 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:40:06.244789 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 00:40:06.245622 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 00:40:06.245835 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:40:06.245886 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 00:40:06.247679 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:40:06.247735 systemd[1]: Stopped kmod-static-nodes.service.
May 17 00:40:06.249589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:40:06.249628 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 00:40:06.252728 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 00:40:06.253210 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:40:06.253312 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 00:40:06.255198 systemd[1]: Reached target initrd-switch-root.target.
May 17 00:40:06.257514 systemd[1]: Starting initrd-switch-root.service...
May 17 00:40:06.273184 systemd[1]: Switching root.
May 17 00:40:06.295428 systemd-journald[199]: Journal stopped
May 17 00:40:09.010085 systemd-journald[199]: Received SIGTERM from PID 1 (systemd).
May 17 00:40:09.010163 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 00:40:09.010185 kernel: SELinux: Class anon_inode not defined in policy.
May 17 00:40:09.010200 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 00:40:09.010215 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:40:09.010238 kernel: SELinux: policy capability open_perms=1
May 17 00:40:09.010253 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:40:09.010266 kernel: SELinux: policy capability always_check_network=0
May 17 00:40:09.010281 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:40:09.010297 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:40:09.010312 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:40:09.010326 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:40:09.010347 systemd[1]: Successfully loaded SELinux policy in 39.383ms.
May 17 00:40:09.010375 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.018ms.
May 17 00:40:09.010393 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:40:09.010409 systemd[1]: Detected virtualization kvm.
May 17 00:40:09.010434 systemd[1]: Detected architecture x86-64.
May 17 00:40:09.010450 systemd[1]: Detected first boot.
May 17 00:40:09.010469 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:40:09.010485 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 00:40:09.010500 systemd[1]: Populated /etc with preset unit settings.
May 17 00:40:09.010516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:40:09.010538 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:40:09.010556 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:40:09.010573 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:40:09.010590 systemd[1]: Stopped initrd-switch-root.service.
May 17 00:40:09.010606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:40:09.010621 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 00:40:09.010637 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 00:40:09.010652 systemd[1]: Created slice system-getty.slice.
May 17 00:40:09.010667 systemd[1]: Created slice system-modprobe.slice.
May 17 00:40:09.010682 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 00:40:09.010697 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 00:40:09.010775 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 00:40:09.010793 systemd[1]: Created slice user.slice.
May 17 00:40:09.010809 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:40:09.010824 systemd[1]: Started systemd-ask-password-wall.path.
May 17 00:40:09.010839 systemd[1]: Set up automount boot.automount.
May 17 00:40:09.010857 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 00:40:09.010874 systemd[1]: Stopped target initrd-switch-root.target.
May 17 00:40:09.010889 systemd[1]: Stopped target initrd-fs.target.
May 17 00:40:09.010904 systemd[1]: Stopped target initrd-root-fs.target.
May 17 00:40:09.010919 systemd[1]: Reached target integritysetup.target.
May 17 00:40:09.010935 systemd[1]: Reached target remote-cryptsetup.target.
May 17 00:40:09.010951 systemd[1]: Reached target remote-fs.target.
May 17 00:40:09.010966 systemd[1]: Reached target slices.target.
May 17 00:40:09.010981 systemd[1]: Reached target swap.target.
May 17 00:40:09.010998 systemd[1]: Reached target torcx.target.
May 17 00:40:09.011014 systemd[1]: Reached target veritysetup.target.
May 17 00:40:09.011030 systemd[1]: Listening on systemd-coredump.socket.
May 17 00:40:09.011045 systemd[1]: Listening on systemd-initctl.socket.
May 17 00:40:09.011060 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:40:09.011076 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:40:09.011092 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:40:09.011107 systemd[1]: Listening on systemd-userdbd.socket.
May 17 00:40:09.011122 systemd[1]: Mounting dev-hugepages.mount...
May 17 00:40:09.011137 systemd[1]: Mounting dev-mqueue.mount...
May 17 00:40:09.011154 systemd[1]: Mounting media.mount...
May 17 00:40:09.011169 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:40:09.011185 systemd[1]: Mounting sys-kernel-debug.mount...
May 17 00:40:09.011200 systemd[1]: Mounting sys-kernel-tracing.mount...
May 17 00:40:09.011216 systemd[1]: Mounting tmp.mount...
May 17 00:40:09.011231 systemd[1]: Starting flatcar-tmpfiles.service...
May 17 00:40:09.011246 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:40:09.011262 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:40:09.011278 systemd[1]: Starting modprobe@configfs.service...
May 17 00:40:09.011296 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:40:09.011312 systemd[1]: Starting modprobe@drm.service...
May 17 00:40:09.011327 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:40:09.011342 systemd[1]: Starting modprobe@fuse.service...
May 17 00:40:09.011358 systemd[1]: Starting modprobe@loop.service...
May 17 00:40:09.011374 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:40:09.011390 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:40:09.011405 systemd[1]: Stopped systemd-fsck-root.service.
May 17 00:40:09.011431 kernel: loop: module loaded
May 17 00:40:09.011447 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:40:09.011463 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:40:09.011478 kernel: fuse: init (API version 7.34)
May 17 00:40:09.011492 systemd[1]: Stopped systemd-journald.service.
May 17 00:40:09.011507 systemd[1]: Starting systemd-journald.service...
May 17 00:40:09.011522 systemd[1]: Starting systemd-modules-load.service...
May 17 00:40:09.011538 systemd[1]: Starting systemd-network-generator.service...
May 17 00:40:09.011559 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:40:09.011574 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:40:09.011591 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:40:09.011607 systemd[1]: Stopped verity-setup.service.
May 17 00:40:09.011623 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:40:09.011642 systemd-journald[991]: Journal started
May 17 00:40:09.011695 systemd-journald[991]: Runtime Journal (/run/log/journal/83fcb09c6e9d4174ab69b88565b85a9d) is 6.0M, max 48.5M, 42.5M free.
May 17 00:40:06.353000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:40:06.590000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:40:06.590000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:40:06.590000 audit: BPF prog-id=10 op=LOAD
May 17 00:40:06.590000 audit: BPF prog-id=10 op=UNLOAD
May 17 00:40:06.590000 audit: BPF prog-id=11 op=LOAD
May 17 00:40:06.590000 audit: BPF prog-id=11 op=UNLOAD
May 17 00:40:06.624000 audit[910]: AVC avc: denied { associate } for pid=910 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:40:06.624000 audit[910]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=893 pid=910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:06.624000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:40:06.627000 audit[910]: AVC avc: denied { associate } for pid=910 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:40:06.627000 audit[910]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=893 pid=910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:06.627000 audit: CWD cwd="/"
May 17 00:40:06.627000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:40:06.627000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:40:06.627000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:40:08.860000 audit: BPF prog-id=12 op=LOAD
May 17 00:40:08.860000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:40:08.860000 audit: BPF prog-id=13 op=LOAD
May 17 00:40:08.860000 audit: BPF prog-id=14 op=LOAD
May 17 00:40:08.861000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:40:08.861000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:40:08.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.882000 audit: BPF prog-id=12 op=UNLOAD
May 17 00:40:08.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:08.990000 audit: BPF prog-id=15 op=LOAD
May 17 00:40:08.990000 audit: BPF prog-id=16 op=LOAD
May 17 00:40:08.990000 audit: BPF prog-id=17 op=LOAD
May 17 00:40:08.990000 audit: BPF prog-id=13 op=UNLOAD
May 17 00:40:08.990000 audit: BPF prog-id=14 op=UNLOAD
May 17 00:40:09.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:09.007000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:40:09.007000 audit[991]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd255cf8d0 a2=4000 a3=7ffd255cf96c items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:40:09.007000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:40:08.859739 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:40:06.623870 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:40:08.859750 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 17 00:40:06.624139 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:40:09.013428 systemd[1]: Started systemd-journald.service.
May 17 00:40:08.862368 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:40:09.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:40:06.624154 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:40:06.624181 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 17 00:40:09.013776 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:40:06.624190 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 17 00:40:06.624214 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 17 00:40:06.624225 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 17 00:40:06.624401 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 17 00:40:06.624432 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:40:06.624443 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:40:06.625128 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 17 00:40:06.625159 /usr/lib/systemd/system-generators/torcx-generator[910]:
time="2025-05-17T00:40:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:40:06.625174 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:40:09.014801 systemd[1]: Mounted dev-mqueue.mount. May 17 00:40:06.625187 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:40:06.625201 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:40:06.625213 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:40:08.593500 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:08Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:08.593774 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:08Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:08.593862 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:08Z" level=debug msg="networkd units propagated" 
assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:08.594009 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:08Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:40:08.594056 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:08Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:40:08.594107 /usr/lib/systemd/system-generators/torcx-generator[910]: time="2025-05-17T00:40:08Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:40:09.015793 systemd[1]: Mounted media.mount. May 17 00:40:09.016572 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:40:09.017470 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:40:09.018410 systemd[1]: Mounted tmp.mount. May 17 00:40:09.019411 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:40:09.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.020545 systemd[1]: Finished kmod-static-nodes.service. May 17 00:40:09.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:09.021672 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:40:09.021792 systemd[1]: Finished modprobe@configfs.service. May 17 00:40:09.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.022897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:09.023056 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:09.024256 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:40:09.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.024446 systemd[1]: Finished modprobe@drm.service. May 17 00:40:09.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.025562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 17 00:40:09.025754 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:09.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.026850 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:40:09.027010 systemd[1]: Finished modprobe@fuse.service. May 17 00:40:09.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.028099 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:09.028243 systemd[1]: Finished modprobe@loop.service. May 17 00:40:09.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.029378 systemd[1]: Finished systemd-modules-load.service. 
May 17 00:40:09.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.030527 systemd[1]: Finished systemd-network-generator.service. May 17 00:40:09.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.031901 systemd[1]: Finished systemd-remount-fs.service. May 17 00:40:09.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.033160 systemd[1]: Reached target network-pre.target. May 17 00:40:09.035086 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:40:09.036907 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:40:09.038020 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:40:09.039229 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:40:09.041167 systemd[1]: Starting systemd-journal-flush.service... May 17 00:40:09.042384 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:09.043360 systemd[1]: Starting systemd-random-seed.service... May 17 00:40:09.044488 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:09.046134 systemd-journald[991]: Time spent on flushing to /var/log/journal/83fcb09c6e9d4174ab69b88565b85a9d is 14.754ms for 1078 entries. 
May 17 00:40:09.046134 systemd-journald[991]: System Journal (/var/log/journal/83fcb09c6e9d4174ab69b88565b85a9d) is 8.0M, max 195.6M, 187.6M free. May 17 00:40:09.426363 systemd-journald[991]: Received client request to flush runtime journal. May 17 00:40:09.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.045366 systemd[1]: Starting systemd-sysctl.service... May 17 00:40:09.048829 systemd[1]: Starting systemd-sysusers.service... May 17 00:40:09.052043 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:40:09.053182 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:40:09.427235 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 17 00:40:09.054197 systemd[1]: Mounted sys-kernel-config.mount. 
May 17 00:40:09.056191 systemd[1]: Starting systemd-udev-settle.service... May 17 00:40:09.080341 systemd[1]: Finished systemd-sysctl.service. May 17 00:40:09.132466 systemd[1]: Finished systemd-sysusers.service. May 17 00:40:09.134631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 00:40:09.150870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:40:09.166987 systemd[1]: Finished systemd-random-seed.service. May 17 00:40:09.168051 systemd[1]: Reached target first-boot-complete.target. May 17 00:40:09.427470 systemd[1]: Finished systemd-journal-flush.service. May 17 00:40:09.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.757080 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:40:09.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.758000 audit: BPF prog-id=18 op=LOAD May 17 00:40:09.758000 audit: BPF prog-id=19 op=LOAD May 17 00:40:09.758000 audit: BPF prog-id=7 op=UNLOAD May 17 00:40:09.758000 audit: BPF prog-id=8 op=UNLOAD May 17 00:40:09.759361 systemd[1]: Starting systemd-udevd.service... May 17 00:40:09.775040 systemd-udevd[1018]: Using default interface naming scheme 'v252'. May 17 00:40:09.789760 systemd[1]: Started systemd-udevd.service. May 17 00:40:09.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.791000 audit: BPF prog-id=20 op=LOAD May 17 00:40:09.794053 systemd[1]: Starting systemd-networkd.service... 
May 17 00:40:09.797000 audit: BPF prog-id=21 op=LOAD May 17 00:40:09.797000 audit: BPF prog-id=22 op=LOAD May 17 00:40:09.797000 audit: BPF prog-id=23 op=LOAD May 17 00:40:09.799054 systemd[1]: Starting systemd-userdbd.service... May 17 00:40:09.816205 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 17 00:40:09.829136 systemd[1]: Started systemd-userdbd.service. May 17 00:40:09.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.866976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:40:09.877739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 17 00:40:09.881256 systemd-networkd[1027]: lo: Link UP May 17 00:40:09.881551 systemd-networkd[1027]: lo: Gained carrier May 17 00:40:09.882055 systemd-networkd[1027]: Enumeration completed May 17 00:40:09.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:09.882211 systemd[1]: Started systemd-networkd.service. May 17 00:40:09.884353 systemd-networkd[1027]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 00:40:09.885415 systemd-networkd[1027]: eth0: Link UP May 17 00:40:09.885485 systemd-networkd[1027]: eth0: Gained carrier May 17 00:40:09.887775 kernel: ACPI: button: Power Button [PWRF] May 17 00:40:09.884000 audit[1031]: AVC avc: denied { confidentiality } for pid=1031 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:40:09.884000 audit[1031]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556906d293f0 a1=338ac a2=7f081763ebc5 a3=5 items=110 ppid=1018 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:09.884000 audit: CWD cwd="/" May 17 00:40:09.884000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=1 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=2 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=3 name=(null) inode=14683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=4 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=5 name=(null) inode=14684 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=6 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=7 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=8 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=9 name=(null) inode=14686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=10 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=11 name=(null) inode=14687 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=12 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=13 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=14 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=15 name=(null) inode=14689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=16 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=17 name=(null) inode=14690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=18 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=19 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=20 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=21 name=(null) inode=14692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=22 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=23 name=(null) inode=14693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=24 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=25 name=(null) inode=14694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=26 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=27 name=(null) inode=14695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=28 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=29 name=(null) inode=14696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=30 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=31 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=32 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=33 name=(null) inode=14698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=34 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=35 name=(null) inode=14699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=36 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=37 name=(null) inode=14700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=38 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=39 name=(null) inode=14701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=40 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=41 name=(null) inode=14702 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:40:09.884000 audit: PATH item=42 name=(null) inode=14682 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=43 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=44 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=45 name=(null) inode=14704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=46 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=47 name=(null) inode=14705 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=48 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=49 name=(null) inode=14706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=50 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=51 
name=(null) inode=14707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=52 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=53 name=(null) inode=14708 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=55 name=(null) inode=14709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=56 name=(null) inode=14709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=57 name=(null) inode=14710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=58 name=(null) inode=14709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=59 name=(null) inode=14711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=60 name=(null) inode=14709 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=61 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=62 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=63 name=(null) inode=14713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=64 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=65 name=(null) inode=14714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=66 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=67 name=(null) inode=14715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=68 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=69 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=70 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=71 name=(null) inode=14717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=72 name=(null) inode=14709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=73 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=74 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=75 name=(null) inode=14719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=76 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=77 name=(null) inode=14720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=78 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=79 name=(null) inode=14721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=80 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=81 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=82 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=83 name=(null) inode=14723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=84 name=(null) inode=14709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=85 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=86 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=87 name=(null) inode=14725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=88 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=89 name=(null) inode=14726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=90 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=91 name=(null) inode=14727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=92 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=93 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=94 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=95 name=(null) inode=14729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=96 name=(null) inode=14709 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
00:40:09.884000 audit: PATH item=97 name=(null) inode=14730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=98 name=(null) inode=14730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=99 name=(null) inode=14731 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=100 name=(null) inode=14730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=101 name=(null) inode=14732 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=102 name=(null) inode=14730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=103 name=(null) inode=14733 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=104 name=(null) inode=14730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=105 name=(null) inode=14734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=106 
name=(null) inode=14730 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=107 name=(null) inode=14735 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PATH item=109 name=(null) inode=14736 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:40:09.884000 audit: PROCTITLE proctitle="(udev-worker)" May 17 00:40:09.898905 systemd-networkd[1027]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:40:09.906265 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 17 00:40:09.907744 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 17 00:40:09.907888 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 17 00:40:09.919763 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 17 00:40:09.922729 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:40:09.982361 kernel: kvm: Nested Virtualization enabled May 17 00:40:09.982469 kernel: SVM: kvm: Nested Paging enabled May 17 00:40:09.982485 kernel: SVM: Virtual VMLOAD VMSAVE supported May 17 00:40:09.982498 kernel: SVM: Virtual GIF supported May 17 00:40:09.998729 kernel: EDAC MC: Ver: 3.0.0 May 17 00:40:10.028111 systemd[1]: Finished systemd-udev-settle.service. 
May 17 00:40:10.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.030148 systemd[1]: Starting lvm2-activation-early.service... May 17 00:40:10.038911 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:40:10.060697 systemd[1]: Finished lvm2-activation-early.service. May 17 00:40:10.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.061869 systemd[1]: Reached target cryptsetup.target. May 17 00:40:10.063877 systemd[1]: Starting lvm2-activation.service... May 17 00:40:10.068476 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:40:10.094853 systemd[1]: Finished lvm2-activation.service. May 17 00:40:10.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.095997 systemd[1]: Reached target local-fs-pre.target. May 17 00:40:10.096921 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:40:10.096953 systemd[1]: Reached target local-fs.target. May 17 00:40:10.097872 systemd[1]: Reached target machines.target. May 17 00:40:10.100002 systemd[1]: Starting ldconfig.service... May 17 00:40:10.101226 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:40:10.101271 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:10.102136 systemd[1]: Starting systemd-boot-update.service... May 17 00:40:10.104113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:40:10.107266 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:40:10.109751 systemd[1]: Starting systemd-sysext.service... May 17 00:40:10.110065 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) May 17 00:40:10.110895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:40:10.120029 systemd[1]: Unmounting usr-share-oem.mount... May 17 00:40:10.122400 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:40:10.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.126602 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:40:10.126831 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:40:10.137750 kernel: loop0: detected capacity change from 0 to 224512 May 17 00:40:10.171560 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31) May 17 00:40:10.171560 systemd-fsck[1065]: /dev/vda1: 790 files, 120726/258078 clusters May 17 00:40:10.173496 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 17 00:40:10.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.178686 systemd[1]: Mounting boot.mount... 
May 17 00:40:10.429612 systemd[1]: Mounted boot.mount. May 17 00:40:10.436728 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:40:10.443276 systemd[1]: Finished systemd-boot-update.service. May 17 00:40:10.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.453722 kernel: loop1: detected capacity change from 0 to 224512 May 17 00:40:10.477682 (sd-sysext)[1069]: Using extensions 'kubernetes'. May 17 00:40:10.478049 (sd-sysext)[1069]: Merged extensions into '/usr'. May 17 00:40:10.497525 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:10.498738 systemd[1]: Mounting usr-share-oem.mount... May 17 00:40:10.499674 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:10.500725 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:10.502583 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:10.504425 systemd[1]: Starting modprobe@loop.service... May 17 00:40:10.505361 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:10.505550 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:10.505692 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:10.508584 systemd[1]: Mounted usr-share-oem.mount. May 17 00:40:10.509847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:10.509984 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:40:10.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.511369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:10.511511 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:10.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.512986 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:10.513097 systemd[1]: Finished modprobe@loop.service. May 17 00:40:10.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.514298 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 17 00:40:10.514425 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:10.515351 systemd[1]: Finished systemd-sysext.service. May 17 00:40:10.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.517222 systemd[1]: Starting ensure-sysext.service... May 17 00:40:10.521412 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:40:10.527718 systemd[1]: Reloading. May 17 00:40:10.532547 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:40:10.533446 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:40:10.535037 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:40:10.538097 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:40:10.577063 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2025-05-17T00:40:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:40:10.580862 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2025-05-17T00:40:10Z" level=info msg="torcx already run" May 17 00:40:10.638606 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:40:10.638621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. May 17 00:40:10.655863 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:40:10.704164 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:40:10.706000 audit: BPF prog-id=24 op=LOAD May 17 00:40:10.706000 audit: BPF prog-id=20 op=UNLOAD May 17 00:40:10.707000 audit: BPF prog-id=25 op=LOAD May 17 00:40:10.707000 audit: BPF prog-id=26 op=LOAD May 17 00:40:10.707000 audit: BPF prog-id=18 op=UNLOAD May 17 00:40:10.707000 audit: BPF prog-id=19 op=UNLOAD May 17 00:40:10.708000 audit: BPF prog-id=27 op=LOAD May 17 00:40:10.708000 audit: BPF prog-id=21 op=UNLOAD May 17 00:40:10.708000 audit: BPF prog-id=28 op=LOAD May 17 00:40:10.708000 audit: BPF prog-id=29 op=LOAD May 17 00:40:10.708000 audit: BPF prog-id=22 op=UNLOAD May 17 00:40:10.708000 audit: BPF prog-id=23 op=UNLOAD May 17 00:40:10.709000 audit: BPF prog-id=30 op=LOAD May 17 00:40:10.709000 audit: BPF prog-id=15 op=UNLOAD May 17 00:40:10.709000 audit: BPF prog-id=31 op=LOAD May 17 00:40:10.709000 audit: BPF prog-id=32 op=LOAD May 17 00:40:10.709000 audit: BPF prog-id=16 op=UNLOAD May 17 00:40:10.709000 audit: BPF prog-id=17 op=UNLOAD May 17 00:40:10.712336 systemd[1]: Finished ldconfig.service. May 17 00:40:10.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.713603 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:40:10.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:10.715641 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:40:10.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.719426 systemd[1]: Starting audit-rules.service... May 17 00:40:10.721223 systemd[1]: Starting clean-ca-certificates.service... May 17 00:40:10.723214 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:40:10.724000 audit: BPF prog-id=33 op=LOAD May 17 00:40:10.725695 systemd[1]: Starting systemd-resolved.service... May 17 00:40:10.726000 audit: BPF prog-id=34 op=LOAD May 17 00:40:10.728134 systemd[1]: Starting systemd-timesyncd.service... May 17 00:40:10.729816 systemd[1]: Starting systemd-update-utmp.service... May 17 00:40:10.731124 systemd[1]: Finished clean-ca-certificates.service. May 17 00:40:10.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.734000 audit[1149]: SYSTEM_BOOT pid=1149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:40:10.734049 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:40:10.739302 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:10.740388 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:10.742120 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:10.743800 systemd[1]: Starting modprobe@loop.service... 
May 17 00:40:10.744671 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:10.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.744809 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:10.744928 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:40:10.745817 systemd[1]: Finished systemd-update-utmp.service. May 17 00:40:10.747157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:10.747254 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:10.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.748682 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:40:10.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.750122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:10.750235 systemd[1]: Finished modprobe@efi_pstore.service. 
May 17 00:40:10.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.751595 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:10.751689 systemd[1]: Finished modprobe@loop.service. May 17 00:40:10.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.753762 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:10.753874 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:10.755079 systemd[1]: Starting systemd-update-done.service... May 17 00:40:10.757553 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:10.758547 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:10.760409 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:10.762156 systemd[1]: Starting modprobe@loop.service... May 17 00:40:10.762990 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 17 00:40:10.763086 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:10.763165 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:40:10.763872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:10.763974 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:10.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.765213 systemd[1]: Finished systemd-update-done.service. May 17 00:40:10.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.766483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:10.766580 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:10.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:40:10.767845 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:10.767933 systemd[1]: Finished modprobe@loop.service. May 17 00:40:10.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:40:10.769103 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:10.769204 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:10.771783 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:40:10.772000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:40:10.772000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd4bca7ed0 a2=420 a3=0 items=0 ppid=1138 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:40:10.772000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:40:10.773379 augenrules[1165]: No rules May 17 00:40:10.773077 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:40:10.781279 systemd[1]: Starting modprobe@drm.service... May 17 00:40:10.783279 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:40:10.785357 systemd[1]: Starting modprobe@loop.service... 
May 17 00:40:10.786870 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:40:10.786965 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:10.787988 systemd[1]: Starting systemd-networkd-wait-online.service... May 17 00:40:10.789123 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:40:10.790244 systemd[1]: Finished audit-rules.service. May 17 00:40:10.791645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:40:10.791882 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:40:10.793132 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:40:10.793228 systemd[1]: Finished modprobe@drm.service. May 17 00:40:10.794394 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:40:10.794483 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:40:10.795660 systemd[1]: Started systemd-timesyncd.service. May 17 00:40:11.250916 systemd-timesyncd[1148]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 17 00:40:11.250990 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:40:11.251083 systemd[1]: Finished modprobe@loop.service. May 17 00:40:11.251326 systemd-timesyncd[1148]: Initial clock synchronization to Sat 2025-05-17 00:40:11.250854 UTC. May 17 00:40:11.252629 systemd[1]: Reached target time-set.target. May 17 00:40:11.253603 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:40:11.253684 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:40:11.253940 systemd[1]: Finished ensure-sysext.service. 
May 17 00:40:11.265080 systemd-resolved[1142]: Positive Trust Anchors: May 17 00:40:11.265095 systemd-resolved[1142]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:40:11.265126 systemd-resolved[1142]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:40:11.272161 systemd-resolved[1142]: Defaulting to hostname 'linux'. May 17 00:40:11.273643 systemd[1]: Started systemd-resolved.service. May 17 00:40:11.274703 systemd[1]: Reached target network.target. May 17 00:40:11.275607 systemd[1]: Reached target nss-lookup.target. May 17 00:40:11.276529 systemd[1]: Reached target sysinit.target. May 17 00:40:11.277473 systemd[1]: Started motdgen.path. May 17 00:40:11.278253 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:40:11.279587 systemd[1]: Started logrotate.timer. May 17 00:40:11.280418 systemd[1]: Started mdadm.timer. May 17 00:40:11.281138 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:40:11.282065 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:40:11.282092 systemd[1]: Reached target paths.target. May 17 00:40:11.282911 systemd[1]: Reached target timers.target. May 17 00:40:11.284048 systemd[1]: Listening on dbus.socket. May 17 00:40:11.285820 systemd[1]: Starting docker.socket... May 17 00:40:11.288800 systemd[1]: Listening on sshd.socket. 
May 17 00:40:11.289725 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:11.290049 systemd[1]: Listening on docker.socket. May 17 00:40:11.290954 systemd[1]: Reached target sockets.target. May 17 00:40:11.291832 systemd[1]: Reached target basic.target. May 17 00:40:11.292729 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:40:11.292755 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:40:11.293539 systemd[1]: Starting containerd.service... May 17 00:40:11.295126 systemd[1]: Starting dbus.service... May 17 00:40:11.296813 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:40:11.298803 systemd[1]: Starting extend-filesystems.service... May 17 00:40:11.300394 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:40:11.301808 systemd[1]: Starting motdgen.service... May 17 00:40:11.302391 jq[1180]: false May 17 00:40:11.303613 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:40:11.305657 systemd[1]: Starting sshd-keygen.service... May 17 00:40:11.308888 systemd[1]: Starting systemd-logind.service... May 17 00:40:11.310302 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:40:11.310358 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:40:11.310770 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 17 00:40:11.311434 systemd[1]: Starting update-engine.service... May 17 00:40:11.313413 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:40:11.318290 dbus-daemon[1179]: [system] SELinux support is enabled May 17 00:40:11.318764 systemd[1]: Started dbus.service. May 17 00:40:11.319056 jq[1192]: true May 17 00:40:11.321827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:40:11.323720 extend-filesystems[1181]: Found loop1 May 17 00:40:11.323720 extend-filesystems[1181]: Found sr0 May 17 00:40:11.323720 extend-filesystems[1181]: Found vda May 17 00:40:11.323720 extend-filesystems[1181]: Found vda1 May 17 00:40:11.323720 extend-filesystems[1181]: Found vda2 May 17 00:40:11.323720 extend-filesystems[1181]: Found vda3 May 17 00:40:11.323720 extend-filesystems[1181]: Found usr May 17 00:40:11.323720 extend-filesystems[1181]: Found vda4 May 17 00:40:11.323720 extend-filesystems[1181]: Found vda6 May 17 00:40:11.323720 extend-filesystems[1181]: Found vda7 May 17 00:40:11.323720 extend-filesystems[1181]: Found vda9 May 17 00:40:11.323720 extend-filesystems[1181]: Checking size of /dev/vda9 May 17 00:40:11.321991 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:40:11.347821 extend-filesystems[1181]: Resized partition /dev/vda9 May 17 00:40:11.322246 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:40:11.350748 extend-filesystems[1209]: resize2fs 1.46.5 (30-Dec-2021) May 17 00:40:11.353623 jq[1202]: true May 17 00:40:11.322392 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:40:11.328457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:40:11.328489 systemd[1]: Reached target system-config.target. 
May 17 00:40:11.331880 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 00:40:11.331900 systemd[1]: Reached target user-config.target. May 17 00:40:11.341511 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:40:11.347376 systemd[1]: Finished motdgen.service. May 17 00:40:11.358603 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 17 00:40:11.378087 env[1203]: time="2025-05-17T00:40:11.378014205Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:40:11.392635 update_engine[1191]: I0517 00:40:11.391883 1191 main.cc:92] Flatcar Update Engine starting May 17 00:40:11.395147 systemd[1]: Started update-engine.service. May 17 00:40:11.395436 update_engine[1191]: I0517 00:40:11.395217 1191 update_check_scheduler.cc:74] Next update check in 6m32s May 17 00:40:11.397971 systemd[1]: Started locksmithd.service. May 17 00:40:11.400712 env[1203]: time="2025-05-17T00:40:11.400657598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:40:11.400890 env[1203]: time="2025-05-17T00:40:11.400861150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:11.402241 env[1203]: time="2025-05-17T00:40:11.402188019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:40:11.402241 env[1203]: time="2025-05-17T00:40:11.402223155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 17 00:40:11.402462 env[1203]: time="2025-05-17T00:40:11.402426276Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:40:11.402462 env[1203]: time="2025-05-17T00:40:11.402452555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:40:11.402545 env[1203]: time="2025-05-17T00:40:11.402468585Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 17 00:40:11.402545 env[1203]: time="2025-05-17T00:40:11.402482180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:40:11.402633 env[1203]: time="2025-05-17T00:40:11.402583330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:11.402853 env[1203]: time="2025-05-17T00:40:11.402819874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:40:11.402999 env[1203]: time="2025-05-17T00:40:11.402966208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:40:11.402999 env[1203]: time="2025-05-17T00:40:11.402990354Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:40:11.403065 env[1203]: time="2025-05-17T00:40:11.403041289Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 17 00:40:11.403065 env[1203]: time="2025-05-17T00:40:11.403055085Z" level=info msg="metadata content store policy set" policy=shared May 17 00:40:11.419237 systemd-logind[1188]: Watching system buttons on /dev/input/event1 (Power Button) May 17 00:40:11.419584 systemd-logind[1188]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 17 00:40:11.419918 systemd-logind[1188]: New seat seat0. May 17 00:40:11.423467 systemd[1]: Started systemd-logind.service. May 17 00:40:11.441611 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 17 00:40:11.528765 locksmithd[1231]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 00:40:11.580824 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:11.830738 extend-filesystems[1209]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:40:11.830738 extend-filesystems[1209]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:40:11.830738 extend-filesystems[1209]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:40:11.580879 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:40:11.841287 extend-filesystems[1181]: Resized filesystem in /dev/vda9 May 17 00:40:11.832233 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:40:11.841283 systemd[1]: Finished extend-filesystems.service. May 17 00:40:11.994002 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:40:12.012192 systemd[1]: Finished sshd-keygen.service. May 17 00:40:12.015069 systemd[1]: Starting issuegen.service... 
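The resize2fs messages above record an online grow of /dev/vda9 from 553472 to 1864699 blocks at a 4 KiB block size. A quick sketch checking what those block counts mean in bytes and GiB (constants taken from the log itself):

```python
# The extend-filesystems log reports resizing /dev/vda9 from 553472 to
# 1864699 blocks; the kernel message says "(4k) blocks", i.e. 4096-byte blocks.
BLOCK = 4096

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count at 4 KiB/block into GiB."""
    return blocks * BLOCK / 2**30

old_gib = blocks_to_gib(553472)   # size before the online resize
new_gib = blocks_to_gib(1864699)  # size after
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")  # roughly 2.11 -> 7.11 GiB
```

This matches the usual Flatcar first-boot behavior of growing the root partition to fill the disk and then resizing the mounted filesystem in place.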
May 17 00:40:12.016251 systemd-networkd[1027]: eth0: Gained IPv6LL May 17 00:40:12.017986 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:40:12.020330 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:40:12.020524 systemd[1]: Finished issuegen.service. May 17 00:40:12.021894 systemd[1]: Reached target network-online.target. May 17 00:40:12.024517 systemd[1]: Starting kubelet.service... May 17 00:40:12.026407 systemd[1]: Starting systemd-user-sessions.service... May 17 00:40:12.040041 systemd[1]: Finished systemd-user-sessions.service. May 17 00:40:12.042342 systemd[1]: Started getty@tty1.service. May 17 00:40:12.044338 systemd[1]: Started serial-getty@ttyS0.service. May 17 00:40:12.045462 systemd[1]: Reached target getty.target. May 17 00:40:12.077634 env[1203]: time="2025-05-17T00:40:12.077556096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:40:12.077634 env[1203]: time="2025-05-17T00:40:12.077621248Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:40:12.077634 env[1203]: time="2025-05-17T00:40:12.077635064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077666974Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077680550Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077692562Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077704775Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077717018Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077728880Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077741554Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077752795Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:40:12.077813 env[1203]: time="2025-05-17T00:40:12.077763535Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:40:12.077993 env[1203]: time="2025-05-17T00:40:12.077947540Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:40:12.078033 env[1203]: time="2025-05-17T00:40:12.078015838Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:40:12.078792 env[1203]: time="2025-05-17T00:40:12.078680004Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:40:12.078792 env[1203]: time="2025-05-17T00:40:12.078753282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:40:12.078792 env[1203]: time="2025-05-17T00:40:12.078778699Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:40:12.078919 env[1203]: time="2025-05-17T00:40:12.078887093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 17 00:40:12.078943 env[1203]: time="2025-05-17T00:40:12.078923952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:40:12.078968 env[1203]: time="2025-05-17T00:40:12.078942096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:40:12.078968 env[1203]: time="2025-05-17T00:40:12.078957865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079009 env[1203]: time="2025-05-17T00:40:12.078973324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079009 env[1203]: time="2025-05-17T00:40:12.078989304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079049 env[1203]: time="2025-05-17T00:40:12.079043797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079070 env[1203]: time="2025-05-17T00:40:12.079059727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079098 env[1203]: time="2025-05-17T00:40:12.079077891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:40:12.079273 env[1203]: time="2025-05-17T00:40:12.079238502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079301 env[1203]: time="2025-05-17T00:40:12.079271634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079301 env[1203]: time="2025-05-17T00:40:12.079287925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 17 00:40:12.079340 env[1203]: time="2025-05-17T00:40:12.079308032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:40:12.079361 env[1203]: time="2025-05-17T00:40:12.079345212Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 00:40:12.079386 env[1203]: time="2025-05-17T00:40:12.079362905Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:40:12.079407 env[1203]: time="2025-05-17T00:40:12.079387401Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 00:40:12.079475 env[1203]: time="2025-05-17T00:40:12.079434339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:40:12.079867 env[1203]: time="2025-05-17T00:40:12.079794254Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.079873383Z" level=info msg="Connect containerd service" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.079918567Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080456607Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080590528Z" level=info msg="Start subscribing containerd event" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080635873Z" level=info msg="Start recovering state" May 17 00:40:12.090712 env[1203]: 
time="2025-05-17T00:40:12.080689534Z" level=info msg="Start event monitor" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080691036Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080705574Z" level=info msg="Start snapshots syncer" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080714851Z" level=info msg="Start cni network conf syncer for default" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080722065Z" level=info msg="Start streaming server" May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080723748Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:40:12.090712 env[1203]: time="2025-05-17T00:40:12.080777739Z" level=info msg="containerd successfully booted in 0.703507s" May 17 00:40:12.080874 systemd[1]: Started containerd.service. May 17 00:40:12.109419 bash[1227]: Updated "/home/core/.ssh/authorized_keys" May 17 00:40:12.110050 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 00:40:13.255007 systemd[1]: Started kubelet.service. May 17 00:40:13.257127 systemd[1]: Reached target multi-user.target. May 17 00:40:13.260242 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 00:40:13.267832 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 00:40:13.268015 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 00:40:13.269561 systemd[1]: Startup finished in 680ms (kernel) + 4.576s (initrd) + 6.502s (userspace) = 11.758s. 
May 17 00:40:13.975483 kubelet[1256]: E0517 00:40:13.975423 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:40:13.976873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:40:13.977013 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:40:13.977257 systemd[1]: kubelet.service: Consumed 1.797s CPU time. May 17 00:40:14.987787 systemd[1]: Created slice system-sshd.slice. May 17 00:40:14.989066 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:55206.service. May 17 00:40:15.033219 sshd[1265]: Accepted publickey for core from 10.0.0.1 port 55206 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:15.034677 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:15.043467 systemd-logind[1188]: New session 1 of user core. May 17 00:40:15.044411 systemd[1]: Created slice user-500.slice. May 17 00:40:15.045533 systemd[1]: Starting user-runtime-dir@500.service... May 17 00:40:15.053213 systemd[1]: Finished user-runtime-dir@500.service. May 17 00:40:15.054500 systemd[1]: Starting user@500.service... May 17 00:40:15.056923 (systemd)[1268]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:15.128885 systemd[1268]: Queued start job for default target default.target. May 17 00:40:15.129372 systemd[1268]: Reached target paths.target. May 17 00:40:15.129392 systemd[1268]: Reached target sockets.target. May 17 00:40:15.129403 systemd[1268]: Reached target timers.target. May 17 00:40:15.129416 systemd[1268]: Reached target basic.target. May 17 00:40:15.129463 systemd[1268]: Reached target default.target. 
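The kubelet exit above is the expected first-boot failure: `/var/lib/kubelet/config.yaml` does not exist yet, and is normally written by `kubeadm init`/`kubeadm join` before the kubelet can start successfully. A minimal sketch of such a file; the field values here are illustrative assumptions, not taken from this host:

```yaml
# /var/lib/kubelet/config.yaml -- hypothetical minimal KubeletConfiguration.
# kubeadm generates the real file; these values are assumptions for illustration.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches the SystemdCgroup=true runc option in the containerd config above
```

Until that file appears, systemd will keep restarting the unit and logging the same `exit-code` failure.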
May 17 00:40:15.129497 systemd[1268]: Startup finished in 64ms. May 17 00:40:15.129553 systemd[1]: Started user@500.service. May 17 00:40:15.130610 systemd[1]: Started session-1.scope. May 17 00:40:15.180731 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:55212.service. May 17 00:40:15.222053 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 55212 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:15.223295 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:15.226756 systemd-logind[1188]: New session 2 of user core. May 17 00:40:15.227743 systemd[1]: Started session-2.scope. May 17 00:40:15.280856 sshd[1277]: pam_unix(sshd:session): session closed for user core May 17 00:40:15.284083 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:55212.service: Deactivated successfully. May 17 00:40:15.284751 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:40:15.285310 systemd-logind[1188]: Session 2 logged out. Waiting for processes to exit. May 17 00:40:15.286489 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:55214.service. May 17 00:40:15.287152 systemd-logind[1188]: Removed session 2. May 17 00:40:15.326785 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 55214 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:15.327988 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:15.331811 systemd-logind[1188]: New session 3 of user core. May 17 00:40:15.332767 systemd[1]: Started session-3.scope. May 17 00:40:15.384028 sshd[1283]: pam_unix(sshd:session): session closed for user core May 17 00:40:15.386742 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:55214.service: Deactivated successfully. May 17 00:40:15.387316 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:40:15.387771 systemd-logind[1188]: Session 3 logged out. Waiting for processes to exit. 
May 17 00:40:15.388837 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:55218.service. May 17 00:40:15.389684 systemd-logind[1188]: Removed session 3. May 17 00:40:15.429691 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 55218 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:15.430794 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:15.434280 systemd-logind[1188]: New session 4 of user core. May 17 00:40:15.435002 systemd[1]: Started session-4.scope. May 17 00:40:15.487942 sshd[1289]: pam_unix(sshd:session): session closed for user core May 17 00:40:15.491544 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:55230.service. May 17 00:40:15.492030 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:55218.service: Deactivated successfully. May 17 00:40:15.492634 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:40:15.493078 systemd-logind[1188]: Session 4 logged out. Waiting for processes to exit. May 17 00:40:15.493880 systemd-logind[1188]: Removed session 4. May 17 00:40:15.533481 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 55230 ssh2: RSA SHA256:zHGb6zFE5uWTPnbfHFhmjGeDUJxvuwQSpK8sihWDiq0 May 17 00:40:15.534782 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:40:15.538580 systemd-logind[1188]: New session 5 of user core. May 17 00:40:15.539648 systemd[1]: Started session-5.scope. May 17 00:40:15.596503 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:40:15.596732 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:40:15.609417 systemd[1]: Starting coreos-metadata.service... May 17 00:40:15.617390 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 00:40:15.617561 systemd[1]: Finished coreos-metadata.service. May 17 00:40:16.208169 systemd[1]: Stopped kubelet.service. 
May 17 00:40:16.208372 systemd[1]: kubelet.service: Consumed 1.797s CPU time. May 17 00:40:16.210130 systemd[1]: Starting kubelet.service... May 17 00:40:16.232757 systemd[1]: Reloading. May 17 00:40:16.410782 /usr/lib/systemd/system-generators/torcx-generator[1356]: time="2025-05-17T00:40:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:40:16.411191 /usr/lib/systemd/system-generators/torcx-generator[1356]: time="2025-05-17T00:40:16Z" level=info msg="torcx already run" May 17 00:40:17.251240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:40:17.251256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:40:17.268739 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:40:17.346584 systemd[1]: Started kubelet.service. May 17 00:40:17.347782 systemd[1]: Stopping kubelet.service... May 17 00:40:17.348001 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:40:17.348158 systemd[1]: Stopped kubelet.service. May 17 00:40:17.349858 systemd[1]: Starting kubelet.service... May 17 00:40:17.438822 systemd[1]: Started kubelet.service. May 17 00:40:17.694304 kubelet[1405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
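The reload above flags two deprecated cgroup-v1 directives in `locksmithd.service`. A drop-in sketch using the replacements systemd suggests; the path and values are assumptions for illustration, not from this host:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf (hypothetical drop-in)
[Service]
CPUShares=          # empty assignment clears the deprecated setting from the base unit
CPUWeight=100       # cgroup-v2 replacement for CPUShares=
MemoryLimit=
MemoryMax=512M      # replacement for MemoryLimit=; value is an assumption
```

The warning about `/var/run/docker.sock` in `docker.socket` is analogous: updating `ListenStream=` to `/run/docker.sock` in the unit file silences it.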
May 17 00:40:17.694304 kubelet[1405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:40:17.694304 kubelet[1405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:40:17.694716 kubelet[1405]: I0517 00:40:17.694297 1405 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:40:17.938104 kubelet[1405]: I0517 00:40:17.938030 1405 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:40:17.938104 kubelet[1405]: I0517 00:40:17.938075 1405 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:40:17.938343 kubelet[1405]: I0517 00:40:17.938326 1405 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:40:17.968503 kubelet[1405]: I0517 00:40:17.968346 1405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:40:17.982591 kubelet[1405]: E0517 00:40:17.982528 1405 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:40:17.982591 kubelet[1405]: I0517 00:40:17.982588 1405 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:40:17.987250 kubelet[1405]: I0517 00:40:17.987195 1405 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:40:17.988380 kubelet[1405]: I0517 00:40:17.988332 1405 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:40:17.988556 kubelet[1405]: I0517 00:40:17.988371 1405 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.140","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:40:17.988658 kubelet[1405]: I0517 00:40:17.988557 1405 topology_manager.go:138] "Creating topology manager with none policy" 
May 17 00:40:17.988658 kubelet[1405]: I0517 00:40:17.988584 1405 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:40:17.988720 kubelet[1405]: I0517 00:40:17.988707 1405 state_mem.go:36] "Initialized new in-memory state store" May 17 00:40:17.991152 kubelet[1405]: I0517 00:40:17.991123 1405 kubelet.go:446] "Attempting to sync node with API server" May 17 00:40:17.991204 kubelet[1405]: I0517 00:40:17.991155 1405 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:40:17.991204 kubelet[1405]: I0517 00:40:17.991181 1405 kubelet.go:352] "Adding apiserver pod source" May 17 00:40:17.991204 kubelet[1405]: I0517 00:40:17.991193 1405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:40:17.991374 kubelet[1405]: E0517 00:40:17.991342 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:17.991404 kubelet[1405]: E0517 00:40:17.991389 1405 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:18.005666 kubelet[1405]: I0517 00:40:18.005632 1405 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:40:18.006155 kubelet[1405]: I0517 00:40:18.006129 1405 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:40:18.006792 kubelet[1405]: W0517 00:40:18.006752 1405 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:40:18.011023 kubelet[1405]: I0517 00:40:18.010990 1405 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:40:18.011112 kubelet[1405]: I0517 00:40:18.011065 1405 server.go:1287] "Started kubelet" May 17 00:40:18.011201 kubelet[1405]: I0517 00:40:18.011139 1405 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:40:18.012137 kubelet[1405]: I0517 00:40:18.012089 1405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:40:18.012530 kubelet[1405]: I0517 00:40:18.012514 1405 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:40:18.012685 kubelet[1405]: I0517 00:40:18.012671 1405 server.go:479] "Adding debug handlers to kubelet server" May 17 00:40:18.015029 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 17 00:40:18.015234 kubelet[1405]: I0517 00:40:18.015064 1405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:40:18.015775 kubelet[1405]: I0517 00:40:18.015749 1405 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:40:18.015838 kubelet[1405]: I0517 00:40:18.015777 1405 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:40:18.015869 kubelet[1405]: I0517 00:40:18.015845 1405 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:40:18.015896 kubelet[1405]: I0517 00:40:18.015884 1405 reconciler.go:26] "Reconciler: start to sync state" May 17 00:40:18.016558 kubelet[1405]: E0517 00:40:18.016526 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.017879 kubelet[1405]: I0517 00:40:18.017859 1405 factory.go:221] Registration of the containerd container factory successfully May 17 
00:40:18.017879 kubelet[1405]: I0517 00:40:18.017876 1405 factory.go:221] Registration of the systemd container factory successfully May 17 00:40:18.018124 kubelet[1405]: I0517 00:40:18.017958 1405 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:40:18.032192 kubelet[1405]: I0517 00:40:18.031929 1405 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:40:18.032192 kubelet[1405]: I0517 00:40:18.031946 1405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:40:18.032192 kubelet[1405]: I0517 00:40:18.031964 1405 state_mem.go:36] "Initialized new in-memory state store" May 17 00:40:18.117602 kubelet[1405]: E0517 00:40:18.117510 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.218059 kubelet[1405]: E0517 00:40:18.217977 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.318662 kubelet[1405]: E0517 00:40:18.318525 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.405202 kubelet[1405]: W0517 00:40:18.405144 1405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 17 00:40:18.405382 kubelet[1405]: E0517 00:40:18.405208 1405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 17 00:40:18.405704 kubelet[1405]: E0517 00:40:18.405677 1405 controller.go:145] "Failed to ensure 
lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.140\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 17 00:40:18.405947 kubelet[1405]: W0517 00:40:18.405911 1405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.140" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 17 00:40:18.405947 kubelet[1405]: W0517 00:40:18.405941 1405 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 17 00:40:18.406051 kubelet[1405]: E0517 00:40:18.405964 1405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 17 00:40:18.406051 kubelet[1405]: E0517 00:40:18.405941 1405 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.140\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 17 00:40:18.418957 kubelet[1405]: E0517 00:40:18.418879 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.519237 kubelet[1405]: E0517 00:40:18.519138 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.607438 kubelet[1405]: E0517 00:40:18.607306 1405 controller.go:145] 
"Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.140\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" May 17 00:40:18.620226 kubelet[1405]: E0517 00:40:18.620085 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.720860 kubelet[1405]: E0517 00:40:18.720798 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.821465 kubelet[1405]: E0517 00:40:18.821398 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.922319 kubelet[1405]: E0517 00:40:18.922178 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:18.940451 kubelet[1405]: I0517 00:40:18.940398 1405 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 17 00:40:18.991851 kubelet[1405]: E0517 00:40:18.991781 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:19.015605 kubelet[1405]: E0517 00:40:19.015552 1405 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.140\" not found" node="10.0.0.140" May 17 00:40:19.023020 kubelet[1405]: E0517 00:40:19.022973 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.031129 kubelet[1405]: E0517 00:40:18.405601 1405 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.140.1840299eeeff0b87 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.140,UID:10.0.0.140,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.140,},FirstTimestamp:2025-05-17 00:40:18.011016071 +0000 UTC m=+0.562407416,LastTimestamp:2025-05-17 00:40:18.011016071 +0000 UTC m=+0.562407416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.140,}" May 17 00:40:19.053667 kubelet[1405]: I0517 00:40:19.053619 1405 policy_none.go:49] "None policy: Start" May 17 00:40:19.053667 kubelet[1405]: I0517 00:40:19.053658 1405 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:40:19.053667 kubelet[1405]: I0517 00:40:19.053676 1405 state_mem.go:35] "Initializing new in-memory state store" May 17 00:40:19.099221 kubelet[1405]: I0517 00:40:19.099153 1405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:40:19.100490 kubelet[1405]: I0517 00:40:19.100446 1405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:40:19.100490 kubelet[1405]: I0517 00:40:19.100496 1405 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:40:19.100686 kubelet[1405]: I0517 00:40:19.100528 1405 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:40:19.100686 kubelet[1405]: I0517 00:40:19.100538 1405 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:40:19.100686 kubelet[1405]: E0517 00:40:19.100612 1405 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:40:19.124114 kubelet[1405]: E0517 00:40:19.124073 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.135303 systemd[1]: Created slice kubepods.slice. May 17 00:40:19.139254 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:40:19.142306 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:40:19.154488 kubelet[1405]: I0517 00:40:19.154433 1405 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:40:19.154661 kubelet[1405]: I0517 00:40:19.154612 1405 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:40:19.154661 kubelet[1405]: I0517 00:40:19.154629 1405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:40:19.155083 kubelet[1405]: I0517 00:40:19.154994 1405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:40:19.155800 kubelet[1405]: E0517 00:40:19.155778 1405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 17 00:40:19.155915 kubelet[1405]: E0517 00:40:19.155897 1405 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.140\" not found" May 17 00:40:19.256433 kubelet[1405]: I0517 00:40:19.255740 1405 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.140" May 17 00:40:19.289836 kubelet[1405]: I0517 00:40:19.289745 1405 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.140" May 17 00:40:19.289836 kubelet[1405]: E0517 00:40:19.289800 1405 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.140\": node \"10.0.0.140\" not found" May 17 00:40:19.305743 kubelet[1405]: I0517 00:40:19.305698 1405 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 17 00:40:19.306198 env[1203]: time="2025-05-17T00:40:19.306160021Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:40:19.306480 kubelet[1405]: I0517 00:40:19.306334 1405 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 17 00:40:19.346130 kubelet[1405]: E0517 00:40:19.346092 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.446395 kubelet[1405]: E0517 00:40:19.446330 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.516700 sudo[1298]: pam_unix(sudo:session): session closed for user root May 17 00:40:19.518807 sshd[1294]: pam_unix(sshd:session): session closed for user core May 17 00:40:19.522120 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:55230.service: Deactivated successfully. May 17 00:40:19.522961 systemd[1]: session-5.scope: Deactivated successfully. 
May 17 00:40:19.523715 systemd-logind[1188]: Session 5 logged out. Waiting for processes to exit. May 17 00:40:19.524603 systemd-logind[1188]: Removed session 5. May 17 00:40:19.547375 kubelet[1405]: E0517 00:40:19.547330 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.648560 kubelet[1405]: E0517 00:40:19.648494 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.749595 kubelet[1405]: E0517 00:40:19.749527 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.850524 kubelet[1405]: E0517 00:40:19.850374 1405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.140\" not found" May 17 00:40:19.992214 kubelet[1405]: E0517 00:40:19.992138 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:20.003421 kubelet[1405]: I0517 00:40:20.003371 1405 apiserver.go:52] "Watching apiserver" May 17 00:40:20.011235 systemd[1]: Created slice kubepods-besteffort-podf4dda1a7_9d77_47de_90a3_e2d48210e58d.slice. May 17 00:40:20.017254 kubelet[1405]: I0517 00:40:20.017218 1405 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:40:20.019990 systemd[1]: Created slice kubepods-burstable-podd2eb883d_e50b_483f_a74d_3846f4b60594.slice. 
May 17 00:40:20.028323 kubelet[1405]: I0517 00:40:20.028281 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-config-path\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028323 kubelet[1405]: I0517 00:40:20.028327 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4dda1a7-9d77-47de-90a3-e2d48210e58d-lib-modules\") pod \"kube-proxy-zxpmn\" (UID: \"f4dda1a7-9d77-47de-90a3-e2d48210e58d\") " pod="kube-system/kube-proxy-zxpmn" May 17 00:40:20.028323 kubelet[1405]: I0517 00:40:20.028342 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-etc-cni-netd\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028626 kubelet[1405]: I0517 00:40:20.028355 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-xtables-lock\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028626 kubelet[1405]: I0517 00:40:20.028372 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-kernel\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028626 kubelet[1405]: I0517 00:40:20.028415 1405 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndcx7\" (UniqueName: \"kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-kube-api-access-ndcx7\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028626 kubelet[1405]: I0517 00:40:20.028442 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-net\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028626 kubelet[1405]: I0517 00:40:20.028465 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4dda1a7-9d77-47de-90a3-e2d48210e58d-xtables-lock\") pod \"kube-proxy-zxpmn\" (UID: \"f4dda1a7-9d77-47de-90a3-e2d48210e58d\") " pod="kube-system/kube-proxy-zxpmn" May 17 00:40:20.028778 kubelet[1405]: I0517 00:40:20.028483 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvd8\" (UniqueName: \"kubernetes.io/projected/f4dda1a7-9d77-47de-90a3-e2d48210e58d-kube-api-access-6xvd8\") pod \"kube-proxy-zxpmn\" (UID: \"f4dda1a7-9d77-47de-90a3-e2d48210e58d\") " pod="kube-system/kube-proxy-zxpmn" May 17 00:40:20.028778 kubelet[1405]: I0517 00:40:20.028500 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-run\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028778 kubelet[1405]: I0517 00:40:20.028519 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cni-path\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028778 kubelet[1405]: I0517 00:40:20.028534 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-lib-modules\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028778 kubelet[1405]: I0517 00:40:20.028558 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2eb883d-e50b-483f-a74d-3846f4b60594-clustermesh-secrets\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028778 kubelet[1405]: I0517 00:40:20.028590 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f4dda1a7-9d77-47de-90a3-e2d48210e58d-kube-proxy\") pod \"kube-proxy-zxpmn\" (UID: \"f4dda1a7-9d77-47de-90a3-e2d48210e58d\") " pod="kube-system/kube-proxy-zxpmn" May 17 00:40:20.028939 kubelet[1405]: I0517 00:40:20.028644 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-bpf-maps\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028939 kubelet[1405]: I0517 00:40:20.028672 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-hostproc\") pod \"cilium-924j2\" (UID: 
\"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028939 kubelet[1405]: I0517 00:40:20.028697 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-cgroup\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.028939 kubelet[1405]: I0517 00:40:20.028736 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-hubble-tls\") pod \"cilium-924j2\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " pod="kube-system/cilium-924j2" May 17 00:40:20.131293 kubelet[1405]: I0517 00:40:20.130520 1405 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:40:20.318305 kubelet[1405]: E0517 00:40:20.318265 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:20.318978 env[1203]: time="2025-05-17T00:40:20.318935859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxpmn,Uid:f4dda1a7-9d77-47de-90a3-e2d48210e58d,Namespace:kube-system,Attempt:0,}" May 17 00:40:20.332238 kubelet[1405]: E0517 00:40:20.332219 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:20.359256 env[1203]: time="2025-05-17T00:40:20.359211525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-924j2,Uid:d2eb883d-e50b-483f-a74d-3846f4b60594,Namespace:kube-system,Attempt:0,}" 
May 17 00:40:20.992428 kubelet[1405]: E0517 00:40:20.992380 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:21.992670 kubelet[1405]: E0517 00:40:21.992622 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:22.334751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1056080381.mount: Deactivated successfully. May 17 00:40:22.343935 env[1203]: time="2025-05-17T00:40:22.343881648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.345164 env[1203]: time="2025-05-17T00:40:22.345108049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.348442 env[1203]: time="2025-05-17T00:40:22.348410834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.350041 env[1203]: time="2025-05-17T00:40:22.349991118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.351546 env[1203]: time="2025-05-17T00:40:22.351497844Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.352957 env[1203]: time="2025-05-17T00:40:22.352920232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 
00:40:22.354253 env[1203]: time="2025-05-17T00:40:22.354211574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.355782 env[1203]: time="2025-05-17T00:40:22.355749720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:22.388090 env[1203]: time="2025-05-17T00:40:22.388010882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:22.388090 env[1203]: time="2025-05-17T00:40:22.388092345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:22.388277 env[1203]: time="2025-05-17T00:40:22.388118464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:22.390199 env[1203]: time="2025-05-17T00:40:22.390134345Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620 pid=1465 runtime=io.containerd.runc.v2 May 17 00:40:22.390634 env[1203]: time="2025-05-17T00:40:22.390559062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:22.390634 env[1203]: time="2025-05-17T00:40:22.390604637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:22.390634 env[1203]: time="2025-05-17T00:40:22.390618123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:22.392611 env[1203]: time="2025-05-17T00:40:22.390734371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/12931c2887c8c4e55739704af3f56c6cfff962ca816ee0f3462abb090582a0c5 pid=1468 runtime=io.containerd.runc.v2 May 17 00:40:22.450089 systemd[1]: Started cri-containerd-a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620.scope. May 17 00:40:22.461482 systemd[1]: Started cri-containerd-12931c2887c8c4e55739704af3f56c6cfff962ca816ee0f3462abb090582a0c5.scope. May 17 00:40:22.482353 env[1203]: time="2025-05-17T00:40:22.482288300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-924j2,Uid:d2eb883d-e50b-483f-a74d-3846f4b60594,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\"" May 17 00:40:22.483765 kubelet[1405]: E0517 00:40:22.483725 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:22.484675 env[1203]: time="2025-05-17T00:40:22.484637135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxpmn,Uid:f4dda1a7-9d77-47de-90a3-e2d48210e58d,Namespace:kube-system,Attempt:0,} returns sandbox id \"12931c2887c8c4e55739704af3f56c6cfff962ca816ee0f3462abb090582a0c5\"" May 17 00:40:22.485221 env[1203]: time="2025-05-17T00:40:22.485195733Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 17 00:40:22.485683 kubelet[1405]: E0517 00:40:22.485656 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:22.993246 kubelet[1405]: E0517 00:40:22.993169 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:23.994115 kubelet[1405]: E0517 00:40:23.994060 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:24.994651 kubelet[1405]: E0517 00:40:24.994621 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:25.995630 kubelet[1405]: E0517 00:40:25.995564 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:26.996396 kubelet[1405]: E0517 00:40:26.996327 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:27.997148 kubelet[1405]: E0517 00:40:27.997099 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:28.998400 kubelet[1405]: E0517 00:40:28.998352 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:29.998801 kubelet[1405]: E0517 00:40:29.998745 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:30.054625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939903541.mount: Deactivated successfully. 
May 17 00:40:30.999046 kubelet[1405]: E0517 00:40:30.998994 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:31.999727 kubelet[1405]: E0517 00:40:31.999679 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:33.000456 kubelet[1405]: E0517 00:40:33.000383 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:34.001257 kubelet[1405]: E0517 00:40:34.001180 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:34.527203 env[1203]: time="2025-05-17T00:40:34.527114150Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:34.529139 env[1203]: time="2025-05-17T00:40:34.529075058Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:34.531123 env[1203]: time="2025-05-17T00:40:34.531073597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:34.531588 env[1203]: time="2025-05-17T00:40:34.531524132Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 17 00:40:34.533329 env[1203]: time="2025-05-17T00:40:34.533277801Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:40:34.534865 env[1203]: time="2025-05-17T00:40:34.534809645Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:40:34.552113 env[1203]: time="2025-05-17T00:40:34.552056535Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd\"" May 17 00:40:34.552965 env[1203]: time="2025-05-17T00:40:34.552929142Z" level=info msg="StartContainer for \"38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd\"" May 17 00:40:34.577428 systemd[1]: Started cri-containerd-38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd.scope. May 17 00:40:34.649069 env[1203]: time="2025-05-17T00:40:34.649019700Z" level=info msg="StartContainer for \"38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd\" returns successfully" May 17 00:40:34.664045 systemd[1]: cri-containerd-38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd.scope: Deactivated successfully. 
May 17 00:40:35.002285 kubelet[1405]: E0517 00:40:35.002166 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:35.127624 kubelet[1405]: E0517 00:40:35.127596 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:35.270892 env[1203]: time="2025-05-17T00:40:35.270754828Z" level=info msg="shim disconnected" id=38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd May 17 00:40:35.270892 env[1203]: time="2025-05-17T00:40:35.270806374Z" level=warning msg="cleaning up after shim disconnected" id=38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd namespace=k8s.io May 17 00:40:35.270892 env[1203]: time="2025-05-17T00:40:35.270818166Z" level=info msg="cleaning up dead shim" May 17 00:40:35.281924 env[1203]: time="2025-05-17T00:40:35.281863184Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1583 runtime=io.containerd.runc.v2\n" May 17 00:40:35.545717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd-rootfs.mount: Deactivated successfully. 
May 17 00:40:36.002960 kubelet[1405]: E0517 00:40:36.002812 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:36.130770 kubelet[1405]: E0517 00:40:36.130739 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:36.132431 env[1203]: time="2025-05-17T00:40:36.132386562Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:40:36.153052 env[1203]: time="2025-05-17T00:40:36.152998474Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de\"" May 17 00:40:36.153528 env[1203]: time="2025-05-17T00:40:36.153491088Z" level=info msg="StartContainer for \"73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de\"" May 17 00:40:36.176905 systemd[1]: Started cri-containerd-73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de.scope. May 17 00:40:36.227063 env[1203]: time="2025-05-17T00:40:36.226997245Z" level=info msg="StartContainer for \"73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de\" returns successfully" May 17 00:40:36.227337 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:40:36.227562 systemd[1]: Stopped systemd-sysctl.service. May 17 00:40:36.227791 systemd[1]: Stopping systemd-sysctl.service... May 17 00:40:36.229141 systemd[1]: Starting systemd-sysctl.service... May 17 00:40:36.230102 systemd[1]: cri-containerd-73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de.scope: Deactivated successfully. 
May 17 00:40:36.236758 systemd[1]: Finished systemd-sysctl.service. May 17 00:40:36.378052 env[1203]: time="2025-05-17T00:40:36.377379299Z" level=info msg="shim disconnected" id=73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de May 17 00:40:36.378052 env[1203]: time="2025-05-17T00:40:36.377426267Z" level=warning msg="cleaning up after shim disconnected" id=73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de namespace=k8s.io May 17 00:40:36.378052 env[1203]: time="2025-05-17T00:40:36.377435294Z" level=info msg="cleaning up dead shim" May 17 00:40:36.391000 env[1203]: time="2025-05-17T00:40:36.390936959Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1648 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:40:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 17 00:40:36.547186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de-rootfs.mount: Deactivated successfully. May 17 00:40:36.941003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707091350.mount: Deactivated successfully. 
May 17 00:40:37.003773 kubelet[1405]: E0517 00:40:37.003710 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:37.134012 kubelet[1405]: E0517 00:40:37.133969 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:37.135932 env[1203]: time="2025-05-17T00:40:37.135888074Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:40:37.632563 env[1203]: time="2025-05-17T00:40:37.632454621Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe\"" May 17 00:40:37.633204 env[1203]: time="2025-05-17T00:40:37.633173039Z" level=info msg="StartContainer for \"30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe\"" May 17 00:40:37.689949 systemd[1]: run-containerd-runc-k8s.io-30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe-runc.C02lra.mount: Deactivated successfully. May 17 00:40:37.695541 systemd[1]: Started cri-containerd-30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe.scope. May 17 00:40:37.814074 systemd[1]: cri-containerd-30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe.scope: Deactivated successfully. 
May 17 00:40:37.814254 env[1203]: time="2025-05-17T00:40:37.814157073Z" level=info msg="StartContainer for \"30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe\" returns successfully" May 17 00:40:37.832845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe-rootfs.mount: Deactivated successfully. May 17 00:40:37.991414 kubelet[1405]: E0517 00:40:37.991275 1405 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:38.004615 kubelet[1405]: E0517 00:40:38.004523 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:38.064155 env[1203]: time="2025-05-17T00:40:38.064105044Z" level=info msg="shim disconnected" id=30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe May 17 00:40:38.064155 env[1203]: time="2025-05-17T00:40:38.064152914Z" level=warning msg="cleaning up after shim disconnected" id=30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe namespace=k8s.io May 17 00:40:38.064155 env[1203]: time="2025-05-17T00:40:38.064161340Z" level=info msg="cleaning up dead shim" May 17 00:40:38.081687 env[1203]: time="2025-05-17T00:40:38.081618314Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1707 runtime=io.containerd.runc.v2\n" May 17 00:40:38.137214 kubelet[1405]: E0517 00:40:38.137182 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:38.138913 env[1203]: time="2025-05-17T00:40:38.138869711Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 
00:40:38.154151 env[1203]: time="2025-05-17T00:40:38.154090411Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74\"" May 17 00:40:38.154780 env[1203]: time="2025-05-17T00:40:38.154749838Z" level=info msg="StartContainer for \"2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74\"" May 17 00:40:38.178471 systemd[1]: Started cri-containerd-2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74.scope. May 17 00:40:38.242659 systemd[1]: cri-containerd-2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74.scope: Deactivated successfully. May 17 00:40:38.244242 env[1203]: time="2025-05-17T00:40:38.244198799Z" level=info msg="StartContainer for \"2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74\" returns successfully" May 17 00:40:38.604602 env[1203]: time="2025-05-17T00:40:38.604449085Z" level=info msg="shim disconnected" id=2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74 May 17 00:40:38.604602 env[1203]: time="2025-05-17T00:40:38.604518195Z" level=warning msg="cleaning up after shim disconnected" id=2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74 namespace=k8s.io May 17 00:40:38.604602 env[1203]: time="2025-05-17T00:40:38.604530027Z" level=info msg="cleaning up dead shim" May 17 00:40:38.621234 env[1203]: time="2025-05-17T00:40:38.621175068Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:40:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1764 runtime=io.containerd.runc.v2\n" May 17 00:40:38.781590 env[1203]: time="2025-05-17T00:40:38.781238371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:38.784276 env[1203]: 
time="2025-05-17T00:40:38.784231506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:38.785960 env[1203]: time="2025-05-17T00:40:38.785898673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:38.788559 env[1203]: time="2025-05-17T00:40:38.788514539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:38.788994 env[1203]: time="2025-05-17T00:40:38.788940348Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\"" May 17 00:40:38.791502 env[1203]: time="2025-05-17T00:40:38.791458181Z" level=info msg="CreateContainer within sandbox \"12931c2887c8c4e55739704af3f56c6cfff962ca816ee0f3462abb090582a0c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:40:38.804795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281223366.mount: Deactivated successfully. 
May 17 00:40:38.808246 env[1203]: time="2025-05-17T00:40:38.808196677Z" level=info msg="CreateContainer within sandbox \"12931c2887c8c4e55739704af3f56c6cfff962ca816ee0f3462abb090582a0c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58b21703032c8395a9b7f5dd9229ae69432e075c61d905e22a0662dfd941c537\"" May 17 00:40:38.808828 env[1203]: time="2025-05-17T00:40:38.808792575Z" level=info msg="StartContainer for \"58b21703032c8395a9b7f5dd9229ae69432e075c61d905e22a0662dfd941c537\"" May 17 00:40:38.846202 systemd[1]: Started cri-containerd-58b21703032c8395a9b7f5dd9229ae69432e075c61d905e22a0662dfd941c537.scope. May 17 00:40:38.881414 env[1203]: time="2025-05-17T00:40:38.881250956Z" level=info msg="StartContainer for \"58b21703032c8395a9b7f5dd9229ae69432e075c61d905e22a0662dfd941c537\" returns successfully" May 17 00:40:39.006122 kubelet[1405]: E0517 00:40:39.006084 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:39.140290 kubelet[1405]: E0517 00:40:39.140196 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:39.145534 env[1203]: time="2025-05-17T00:40:39.145483180Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:40:39.145891 kubelet[1405]: E0517 00:40:39.145792 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:39.163525 env[1203]: time="2025-05-17T00:40:39.163473335Z" level=info msg="CreateContainer within sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns 
container id \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\"" May 17 00:40:39.163912 env[1203]: time="2025-05-17T00:40:39.163886770Z" level=info msg="StartContainer for \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\"" May 17 00:40:39.166155 kubelet[1405]: I0517 00:40:39.166085 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zxpmn" podStartSLOduration=3.861948795 podStartE2EDuration="20.166059386s" podCreationTimestamp="2025-05-17 00:40:19 +0000 UTC" firstStartedPulling="2025-05-17 00:40:22.486000373 +0000 UTC m=+5.037391718" lastFinishedPulling="2025-05-17 00:40:38.790110964 +0000 UTC m=+21.341502309" observedRunningTime="2025-05-17 00:40:39.165959067 +0000 UTC m=+21.717350432" watchObservedRunningTime="2025-05-17 00:40:39.166059386 +0000 UTC m=+21.717450731" May 17 00:40:39.186421 systemd[1]: Started cri-containerd-28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac.scope. May 17 00:40:39.237211 env[1203]: time="2025-05-17T00:40:39.237153428Z" level=info msg="StartContainer for \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\" returns successfully" May 17 00:40:39.339763 kubelet[1405]: I0517 00:40:39.339728 1405 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:40:39.592654 kernel: Initializing XFRM netlink socket May 17 00:40:40.006454 kubelet[1405]: E0517 00:40:40.006396 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:40.152906 kubelet[1405]: E0517 00:40:40.152861 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:40.153111 kubelet[1405]: E0517 00:40:40.153089 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:41.012813 kubelet[1405]: E0517 00:40:41.007482 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:41.080816 systemd-networkd[1027]: cilium_host: Link UP May 17 00:40:41.086253 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 17 00:40:41.086417 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 17 00:40:41.085740 systemd-networkd[1027]: cilium_net: Link UP May 17 00:40:41.087167 systemd-networkd[1027]: cilium_net: Gained carrier May 17 00:40:41.087434 systemd-networkd[1027]: cilium_host: Gained carrier May 17 00:40:41.157956 kubelet[1405]: E0517 00:40:41.157926 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:41.504066 kubelet[1405]: I0517 00:40:41.500633 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-924j2" podStartSLOduration=10.45231656 podStartE2EDuration="22.500602431s" podCreationTimestamp="2025-05-17 00:40:19 +0000 UTC" firstStartedPulling="2025-05-17 00:40:22.484779993 +0000 UTC m=+5.036171338" lastFinishedPulling="2025-05-17 00:40:34.533065864 +0000 UTC m=+17.084457209" observedRunningTime="2025-05-17 00:40:40.226656803 +0000 UTC m=+22.778048178" watchObservedRunningTime="2025-05-17 00:40:41.500602431 +0000 UTC m=+24.051993786" May 17 00:40:41.519527 systemd-networkd[1027]: cilium_net: Gained IPv6LL May 17 00:40:41.522918 systemd[1]: Created slice kubepods-besteffort-pod6c528100_177f_4b54_a52f_1510ab69e01d.slice. 
May 17 00:40:41.593641 systemd-networkd[1027]: cilium_vxlan: Link UP May 17 00:40:41.593652 systemd-networkd[1027]: cilium_vxlan: Gained carrier May 17 00:40:41.682365 kubelet[1405]: I0517 00:40:41.681428 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m888k\" (UniqueName: \"kubernetes.io/projected/6c528100-177f-4b54-a52f-1510ab69e01d-kube-api-access-m888k\") pod \"nginx-deployment-7fcdb87857-wkjq2\" (UID: \"6c528100-177f-4b54-a52f-1510ab69e01d\") " pod="default/nginx-deployment-7fcdb87857-wkjq2" May 17 00:40:41.964213 systemd-networkd[1027]: cilium_host: Gained IPv6LL May 17 00:40:42.745069 kubelet[1405]: E0517 00:40:42.744314 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:42.745768 env[1203]: time="2025-05-17T00:40:42.745715347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-wkjq2,Uid:6c528100-177f-4b54-a52f-1510ab69e01d,Namespace:default,Attempt:0,}" May 17 00:40:42.746468 kubelet[1405]: E0517 00:40:42.746448 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:42.939753 kernel: NET: Registered PF_ALG protocol family May 17 00:40:43.745955 kubelet[1405]: E0517 00:40:43.745890 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:43.756909 systemd-networkd[1027]: cilium_vxlan: Gained IPv6LL May 17 00:40:44.098743 kubelet[1405]: E0517 00:40:44.095594 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:44.746618 kubelet[1405]: E0517 00:40:44.746553 1405 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:45.209705 systemd-networkd[1027]: lxc_health: Link UP May 17 00:40:45.332873 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:40:45.333348 systemd-networkd[1027]: lxc_health: Gained carrier May 17 00:40:45.501636 systemd-networkd[1027]: lxc15591ff34f67: Link UP May 17 00:40:45.560409 kernel: eth0: renamed from tmp5e182 May 17 00:40:45.573670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc15591ff34f67: link becomes ready May 17 00:40:45.573503 systemd-networkd[1027]: lxc15591ff34f67: Gained carrier May 17 00:40:45.748664 kubelet[1405]: E0517 00:40:45.748602 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:46.334669 kubelet[1405]: E0517 00:40:46.334629 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:46.762825 kubelet[1405]: E0517 00:40:46.749897 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:46.762825 kubelet[1405]: E0517 00:40:46.757201 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:40:47.152600 systemd-networkd[1027]: lxc15591ff34f67: Gained IPv6LL May 17 00:40:47.280871 systemd-networkd[1027]: lxc_health: Gained IPv6LL May 17 00:40:47.755996 kubelet[1405]: E0517 00:40:47.755931 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:48.757218 kubelet[1405]: E0517 00:40:48.757083 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:49.757928 kubelet[1405]: E0517 
00:40:49.757865 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:50.762625 kubelet[1405]: E0517 00:40:50.758807 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:51.762835 kubelet[1405]: E0517 00:40:51.762768 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:51.855071 env[1203]: time="2025-05-17T00:40:51.854932080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:40:51.855071 env[1203]: time="2025-05-17T00:40:51.854994068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:40:51.855071 env[1203]: time="2025-05-17T00:40:51.855014447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:40:51.855664 env[1203]: time="2025-05-17T00:40:51.855240970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e182ef2baadd11db9eb9665a166dff358c2785c81769ef1992a7a2dcb80485a pid=2487 runtime=io.containerd.runc.v2 May 17 00:40:51.903862 systemd[1]: run-containerd-runc-k8s.io-5e182ef2baadd11db9eb9665a166dff358c2785c81769ef1992a7a2dcb80485a-runc.YQIpxw.mount: Deactivated successfully. May 17 00:40:51.959151 systemd[1]: Started cri-containerd-5e182ef2baadd11db9eb9665a166dff358c2785c81769ef1992a7a2dcb80485a.scope. 
May 17 00:40:51.984652 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:40:52.023209 env[1203]: time="2025-05-17T00:40:52.022716692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-wkjq2,Uid:6c528100-177f-4b54-a52f-1510ab69e01d,Namespace:default,Attempt:0,} returns sandbox id \"5e182ef2baadd11db9eb9665a166dff358c2785c81769ef1992a7a2dcb80485a\"" May 17 00:40:52.024712 env[1203]: time="2025-05-17T00:40:52.024687083Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:40:52.763761 kubelet[1405]: E0517 00:40:52.763521 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:53.765083 kubelet[1405]: E0517 00:40:53.764982 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:54.765554 kubelet[1405]: E0517 00:40:54.765480 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:55.659662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096763966.mount: Deactivated successfully. May 17 00:40:55.766035 kubelet[1405]: E0517 00:40:55.765908 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:56.514759 update_engine[1191]: I0517 00:40:56.514657 1191 update_attempter.cc:509] Updating boot flags... 
May 17 00:40:56.766833 kubelet[1405]: E0517 00:40:56.766674 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:57.766889 kubelet[1405]: E0517 00:40:57.766827 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:57.991948 kubelet[1405]: E0517 00:40:57.991893 1405 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:58.504817 env[1203]: time="2025-05-17T00:40:58.504712139Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:58.508607 env[1203]: time="2025-05-17T00:40:58.508428103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:58.511716 env[1203]: time="2025-05-17T00:40:58.511614732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:58.517232 env[1203]: time="2025-05-17T00:40:58.516930132Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:40:58.517232 env[1203]: time="2025-05-17T00:40:58.517181720Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:40:58.523200 env[1203]: time="2025-05-17T00:40:58.522764566Z" level=info msg="CreateContainer within sandbox 
\"5e182ef2baadd11db9eb9665a166dff358c2785c81769ef1992a7a2dcb80485a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 17 00:40:58.673176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962903909.mount: Deactivated successfully. May 17 00:40:58.768470 kubelet[1405]: E0517 00:40:58.767992 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:58.820741 env[1203]: time="2025-05-17T00:40:58.818966377Z" level=info msg="CreateContainer within sandbox \"5e182ef2baadd11db9eb9665a166dff358c2785c81769ef1992a7a2dcb80485a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"80abb55da836b219d7825ded5d57104805969c80cf04cf25195343e933c6db08\"" May 17 00:40:58.821726 env[1203]: time="2025-05-17T00:40:58.821541214Z" level=info msg="StartContainer for \"80abb55da836b219d7825ded5d57104805969c80cf04cf25195343e933c6db08\"" May 17 00:40:58.848377 systemd[1]: Started cri-containerd-80abb55da836b219d7825ded5d57104805969c80cf04cf25195343e933c6db08.scope. May 17 00:40:59.302440 env[1203]: time="2025-05-17T00:40:59.302374747Z" level=info msg="StartContainer for \"80abb55da836b219d7825ded5d57104805969c80cf04cf25195343e933c6db08\" returns successfully" May 17 00:40:59.670499 systemd[1]: run-containerd-runc-k8s.io-80abb55da836b219d7825ded5d57104805969c80cf04cf25195343e933c6db08-runc.hpmyJ9.mount: Deactivated successfully. 
May 17 00:40:59.768932 kubelet[1405]: E0517 00:40:59.768850 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:40:59.891148 kubelet[1405]: I0517 00:40:59.889567 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-wkjq2" podStartSLOduration=12.393643898 podStartE2EDuration="18.889539505s" podCreationTimestamp="2025-05-17 00:40:41 +0000 UTC" firstStartedPulling="2025-05-17 00:40:52.024186929 +0000 UTC m=+34.575578274" lastFinishedPulling="2025-05-17 00:40:58.520082536 +0000 UTC m=+41.071473881" observedRunningTime="2025-05-17 00:40:59.882973621 +0000 UTC m=+42.434364976" watchObservedRunningTime="2025-05-17 00:40:59.889539505 +0000 UTC m=+42.440930850" May 17 00:41:00.769349 kubelet[1405]: E0517 00:41:00.769200 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:01.769889 kubelet[1405]: E0517 00:41:01.769742 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:02.770274 kubelet[1405]: E0517 00:41:02.770177 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:03.770946 kubelet[1405]: E0517 00:41:03.770781 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:04.771915 kubelet[1405]: E0517 00:41:04.771766 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:05.772489 kubelet[1405]: E0517 00:41:05.772329 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:05.836518 systemd[1]: Created slice 
kubepods-besteffort-pod72a66f79_9348_46f0_8e06_e213b6de7912.slice. May 17 00:41:06.007371 kubelet[1405]: I0517 00:41:06.000605 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/72a66f79-9348-46f0-8e06-e213b6de7912-data\") pod \"nfs-server-provisioner-0\" (UID: \"72a66f79-9348-46f0-8e06-e213b6de7912\") " pod="default/nfs-server-provisioner-0" May 17 00:41:06.007371 kubelet[1405]: I0517 00:41:06.000679 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mzh5\" (UniqueName: \"kubernetes.io/projected/72a66f79-9348-46f0-8e06-e213b6de7912-kube-api-access-2mzh5\") pod \"nfs-server-provisioner-0\" (UID: \"72a66f79-9348-46f0-8e06-e213b6de7912\") " pod="default/nfs-server-provisioner-0" May 17 00:41:06.160279 env[1203]: time="2025-05-17T00:41:06.158774353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:72a66f79-9348-46f0-8e06-e213b6de7912,Namespace:default,Attempt:0,}" May 17 00:41:06.451323 systemd-networkd[1027]: lxc5fe3f8ec4eae: Link UP May 17 00:41:06.464027 kernel: eth0: renamed from tmpe1626 May 17 00:41:06.482237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:41:06.482407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5fe3f8ec4eae: link becomes ready May 17 00:41:06.483409 systemd-networkd[1027]: lxc5fe3f8ec4eae: Gained carrier May 17 00:41:06.773157 kubelet[1405]: E0517 00:41:06.773006 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:06.886514 env[1203]: time="2025-05-17T00:41:06.886378959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:06.886514 env[1203]: time="2025-05-17T00:41:06.886441096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:06.886514 env[1203]: time="2025-05-17T00:41:06.886461305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:06.895067 env[1203]: time="2025-05-17T00:41:06.887404185Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e16267f8ef0c48fe93bf91a26e0508c72fb59bbeaae05ca117614f570f863570 pid=2633 runtime=io.containerd.runc.v2 May 17 00:41:06.943769 systemd[1]: Started cri-containerd-e16267f8ef0c48fe93bf91a26e0508c72fb59bbeaae05ca117614f570f863570.scope. May 17 00:41:07.002235 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:41:07.052343 env[1203]: time="2025-05-17T00:41:07.052186631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:72a66f79-9348-46f0-8e06-e213b6de7912,Namespace:default,Attempt:0,} returns sandbox id \"e16267f8ef0c48fe93bf91a26e0508c72fb59bbeaae05ca117614f570f863570\"" May 17 00:41:07.054750 env[1203]: time="2025-05-17T00:41:07.054716989Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 17 00:41:07.773879 kubelet[1405]: E0517 00:41:07.773734 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:08.143951 systemd-networkd[1027]: lxc5fe3f8ec4eae: Gained IPv6LL May 17 00:41:08.777786 kubelet[1405]: E0517 00:41:08.775183 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:09.783371 kubelet[1405]: E0517 00:41:09.777862 1405 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:10.783148 kubelet[1405]: E0517 00:41:10.783004 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:11.361054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724997199.mount: Deactivated successfully. May 17 00:41:11.783774 kubelet[1405]: E0517 00:41:11.783684 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:12.784874 kubelet[1405]: E0517 00:41:12.784698 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:13.787377 kubelet[1405]: E0517 00:41:13.787304 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:14.788640 kubelet[1405]: E0517 00:41:14.788589 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:15.437340 env[1203]: time="2025-05-17T00:41:15.437248221Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:15.443613 env[1203]: time="2025-05-17T00:41:15.443509323Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:15.448567 env[1203]: time="2025-05-17T00:41:15.448492076Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:15.479857 env[1203]: 
time="2025-05-17T00:41:15.479787423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:15.484308 env[1203]: time="2025-05-17T00:41:15.480922521Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 17 00:41:15.485091 env[1203]: time="2025-05-17T00:41:15.485023433Z" level=info msg="CreateContainer within sandbox \"e16267f8ef0c48fe93bf91a26e0508c72fb59bbeaae05ca117614f570f863570\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 17 00:41:15.708457 env[1203]: time="2025-05-17T00:41:15.708299003Z" level=info msg="CreateContainer within sandbox \"e16267f8ef0c48fe93bf91a26e0508c72fb59bbeaae05ca117614f570f863570\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"71ea756606ebb277e44619ee0a79127aaa960077051f7eabdd5c5189f3f60482\"" May 17 00:41:15.709985 env[1203]: time="2025-05-17T00:41:15.709901380Z" level=info msg="StartContainer for \"71ea756606ebb277e44619ee0a79127aaa960077051f7eabdd5c5189f3f60482\"" May 17 00:41:15.788716 systemd[1]: Started cri-containerd-71ea756606ebb277e44619ee0a79127aaa960077051f7eabdd5c5189f3f60482.scope. 
May 17 00:41:15.790695 kubelet[1405]: E0517 00:41:15.789919 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:16.242645 env[1203]: time="2025-05-17T00:41:16.242380619Z" level=info msg="StartContainer for \"71ea756606ebb277e44619ee0a79127aaa960077051f7eabdd5c5189f3f60482\" returns successfully" May 17 00:41:16.791194 kubelet[1405]: E0517 00:41:16.791044 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:17.324762 kubelet[1405]: I0517 00:41:17.324169 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.895247077 podStartE2EDuration="12.324146084s" podCreationTimestamp="2025-05-17 00:41:05 +0000 UTC" firstStartedPulling="2025-05-17 00:41:07.054101567 +0000 UTC m=+49.605492912" lastFinishedPulling="2025-05-17 00:41:15.483000574 +0000 UTC m=+58.034391919" observedRunningTime="2025-05-17 00:41:17.323858041 +0000 UTC m=+59.875249427" watchObservedRunningTime="2025-05-17 00:41:17.324146084 +0000 UTC m=+59.875537459" May 17 00:41:17.793180 kubelet[1405]: E0517 00:41:17.791314 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:17.992410 kubelet[1405]: E0517 00:41:17.992228 1405 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:18.792610 kubelet[1405]: E0517 00:41:18.792499 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:19.793597 kubelet[1405]: E0517 00:41:19.793418 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:20.794065 kubelet[1405]: E0517 00:41:20.793975 1405 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:21.795036 kubelet[1405]: E0517 00:41:21.794879 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:22.795497 kubelet[1405]: E0517 00:41:22.795333 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:23.796648 kubelet[1405]: E0517 00:41:23.796476 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:24.797427 kubelet[1405]: E0517 00:41:24.797291 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:25.748671 systemd[1]: Created slice kubepods-besteffort-pod6378e247_beb4_4f9c_bfe0_7519d93b8666.slice. May 17 00:41:25.801098 kubelet[1405]: E0517 00:41:25.800988 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:25.806724 kubelet[1405]: I0517 00:41:25.804818 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-19595c5c-d77c-476f-bb43-287b28f7b642\" (UniqueName: \"kubernetes.io/nfs/6378e247-beb4-4f9c-bfe0-7519d93b8666-pvc-19595c5c-d77c-476f-bb43-287b28f7b642\") pod \"test-pod-1\" (UID: \"6378e247-beb4-4f9c-bfe0-7519d93b8666\") " pod="default/test-pod-1" May 17 00:41:25.807103 kubelet[1405]: I0517 00:41:25.806991 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mbwp\" (UniqueName: \"kubernetes.io/projected/6378e247-beb4-4f9c-bfe0-7519d93b8666-kube-api-access-6mbwp\") pod \"test-pod-1\" (UID: \"6378e247-beb4-4f9c-bfe0-7519d93b8666\") " pod="default/test-pod-1" May 17 00:41:26.164634 kernel: FS-Cache: Loaded May 17 00:41:26.242806 kernel: RPC: Registered named UNIX 
socket transport module. May 17 00:41:26.243006 kernel: RPC: Registered udp transport module. May 17 00:41:26.243038 kernel: RPC: Registered tcp transport module. May 17 00:41:26.243831 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 17 00:41:26.342968 kernel: FS-Cache: Netfs 'nfs' registered for caching May 17 00:41:26.645273 kernel: NFS: Registering the id_resolver key type May 17 00:41:26.645442 kernel: Key type id_resolver registered May 17 00:41:26.645483 kernel: Key type id_legacy registered May 17 00:41:26.729331 nfsidmap[2757]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 17 00:41:26.734087 nfsidmap[2760]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 17 00:41:26.801740 kubelet[1405]: E0517 00:41:26.801643 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:26.957019 env[1203]: time="2025-05-17T00:41:26.956927627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6378e247-beb4-4f9c-bfe0-7519d93b8666,Namespace:default,Attempt:0,}" May 17 00:41:27.539928 systemd-networkd[1027]: lxc9475efc294dc: Link UP May 17 00:41:27.556605 kernel: eth0: renamed from tmp19518 May 17 00:41:27.570219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:41:27.570355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9475efc294dc: link becomes ready May 17 00:41:27.570346 systemd-networkd[1027]: lxc9475efc294dc: Gained carrier May 17 00:41:27.802114 kubelet[1405]: E0517 00:41:27.801921 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:27.870857 env[1203]: time="2025-05-17T00:41:27.870476848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:27.870857 env[1203]: time="2025-05-17T00:41:27.870531260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:27.870857 env[1203]: time="2025-05-17T00:41:27.870546419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:27.871245 env[1203]: time="2025-05-17T00:41:27.871167566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/195180febfed91dfc8f0cd0441d434fe43d43fb0572f40aa922964a70657d6cf pid=2792 runtime=io.containerd.runc.v2 May 17 00:41:27.902550 systemd[1]: run-containerd-runc-k8s.io-195180febfed91dfc8f0cd0441d434fe43d43fb0572f40aa922964a70657d6cf-runc.AoFowV.mount: Deactivated successfully. May 17 00:41:27.920422 systemd[1]: Started cri-containerd-195180febfed91dfc8f0cd0441d434fe43d43fb0572f40aa922964a70657d6cf.scope. 
May 17 00:41:27.983877 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:41:28.054471 env[1203]: time="2025-05-17T00:41:28.054303319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6378e247-beb4-4f9c-bfe0-7519d93b8666,Namespace:default,Attempt:0,} returns sandbox id \"195180febfed91dfc8f0cd0441d434fe43d43fb0572f40aa922964a70657d6cf\"" May 17 00:41:28.056738 env[1203]: time="2025-05-17T00:41:28.056673473Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:41:28.449302 env[1203]: time="2025-05-17T00:41:28.449132429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:28.456975 env[1203]: time="2025-05-17T00:41:28.454957363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:28.461348 env[1203]: time="2025-05-17T00:41:28.461209542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:28.465388 env[1203]: time="2025-05-17T00:41:28.465316478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:41:28.466466 env[1203]: time="2025-05-17T00:41:28.466395134Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:41:28.470236 env[1203]: time="2025-05-17T00:41:28.470159557Z" level=info msg="CreateContainer within sandbox 
\"195180febfed91dfc8f0cd0441d434fe43d43fb0572f40aa922964a70657d6cf\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 17 00:41:28.516043 env[1203]: time="2025-05-17T00:41:28.515850148Z" level=info msg="CreateContainer within sandbox \"195180febfed91dfc8f0cd0441d434fe43d43fb0572f40aa922964a70657d6cf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3fe5d435112433cd2498a4c85a716bfd919455135b0b7d159ce5e66a59386cd1\"" May 17 00:41:28.517705 env[1203]: time="2025-05-17T00:41:28.517484039Z" level=info msg="StartContainer for \"3fe5d435112433cd2498a4c85a716bfd919455135b0b7d159ce5e66a59386cd1\"" May 17 00:41:28.567537 systemd[1]: Started cri-containerd-3fe5d435112433cd2498a4c85a716bfd919455135b0b7d159ce5e66a59386cd1.scope. May 17 00:41:28.620075 env[1203]: time="2025-05-17T00:41:28.620003328Z" level=info msg="StartContainer for \"3fe5d435112433cd2498a4c85a716bfd919455135b0b7d159ce5e66a59386cd1\" returns successfully" May 17 00:41:28.802610 kubelet[1405]: E0517 00:41:28.802404 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:29.380510 kubelet[1405]: I0517 00:41:29.380418 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.968078001 podStartE2EDuration="23.380397624s" podCreationTimestamp="2025-05-17 00:41:06 +0000 UTC" firstStartedPulling="2025-05-17 00:41:28.055925808 +0000 UTC m=+70.607317153" lastFinishedPulling="2025-05-17 00:41:28.468245431 +0000 UTC m=+71.019636776" observedRunningTime="2025-05-17 00:41:29.379955955 +0000 UTC m=+71.931347320" watchObservedRunningTime="2025-05-17 00:41:29.380397624 +0000 UTC m=+71.931788969" May 17 00:41:29.494711 systemd[1]: run-containerd-runc-k8s.io-3fe5d435112433cd2498a4c85a716bfd919455135b0b7d159ce5e66a59386cd1-runc.MtTKoD.mount: Deactivated successfully. 
May 17 00:41:29.516416 systemd-networkd[1027]: lxc9475efc294dc: Gained IPv6LL May 17 00:41:29.802916 kubelet[1405]: E0517 00:41:29.802793 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:30.804131 kubelet[1405]: E0517 00:41:30.803441 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:31.804491 kubelet[1405]: E0517 00:41:31.804335 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:32.805840 kubelet[1405]: E0517 00:41:32.805432 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:33.806108 kubelet[1405]: E0517 00:41:33.805985 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:34.807216 kubelet[1405]: E0517 00:41:34.807145 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:35.789243 systemd[1]: run-containerd-runc-k8s.io-28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac-runc.NEfovd.mount: Deactivated successfully. 
May 17 00:41:35.812852 kubelet[1405]: E0517 00:41:35.812773 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:35.827096 env[1203]: time="2025-05-17T00:41:35.826974377Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:41:35.852086 env[1203]: time="2025-05-17T00:41:35.852014036Z" level=info msg="StopContainer for \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\" with timeout 2 (s)" May 17 00:41:35.852265 env[1203]: time="2025-05-17T00:41:35.852243979Z" level=info msg="Stop container \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\" with signal terminated" May 17 00:41:35.868771 systemd-networkd[1027]: lxc_health: Link DOWN May 17 00:41:35.868787 systemd-networkd[1027]: lxc_health: Lost carrier May 17 00:41:35.972809 systemd[1]: cri-containerd-28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac.scope: Deactivated successfully. May 17 00:41:35.973114 systemd[1]: cri-containerd-28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac.scope: Consumed 12.008s CPU time. May 17 00:41:36.046533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac-rootfs.mount: Deactivated successfully. 
May 17 00:41:36.326470 env[1203]: time="2025-05-17T00:41:36.325992244Z" level=info msg="shim disconnected" id=28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac May 17 00:41:36.326470 env[1203]: time="2025-05-17T00:41:36.326058458Z" level=warning msg="cleaning up after shim disconnected" id=28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac namespace=k8s.io May 17 00:41:36.326470 env[1203]: time="2025-05-17T00:41:36.326073657Z" level=info msg="cleaning up dead shim" May 17 00:41:36.363038 env[1203]: time="2025-05-17T00:41:36.361454188Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2918 runtime=io.containerd.runc.v2\n" May 17 00:41:36.410586 env[1203]: time="2025-05-17T00:41:36.410482416Z" level=info msg="StopContainer for \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\" returns successfully" May 17 00:41:36.419954 env[1203]: time="2025-05-17T00:41:36.411290003Z" level=info msg="StopPodSandbox for \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\"" May 17 00:41:36.419954 env[1203]: time="2025-05-17T00:41:36.411353582Z" level=info msg="Container to stop \"38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.419954 env[1203]: time="2025-05-17T00:41:36.411371907Z" level=info msg="Container to stop \"73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.419954 env[1203]: time="2025-05-17T00:41:36.411385452Z" level=info msg="Container to stop \"30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.419954 env[1203]: time="2025-05-17T00:41:36.411399178Z" level=info msg="Container to stop 
\"2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.419954 env[1203]: time="2025-05-17T00:41:36.411413505Z" level=info msg="Container to stop \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:36.416236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620-shm.mount: Deactivated successfully. May 17 00:41:36.438139 systemd[1]: cri-containerd-a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620.scope: Deactivated successfully. May 17 00:41:36.519825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620-rootfs.mount: Deactivated successfully. May 17 00:41:36.546017 env[1203]: time="2025-05-17T00:41:36.537705978Z" level=info msg="shim disconnected" id=a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620 May 17 00:41:36.549640 env[1203]: time="2025-05-17T00:41:36.546104858Z" level=warning msg="cleaning up after shim disconnected" id=a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620 namespace=k8s.io May 17 00:41:36.549640 env[1203]: time="2025-05-17T00:41:36.546292891Z" level=info msg="cleaning up dead shim" May 17 00:41:36.596349 env[1203]: time="2025-05-17T00:41:36.595693098Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2950 runtime=io.containerd.runc.v2\n" May 17 00:41:36.596349 env[1203]: time="2025-05-17T00:41:36.596098819Z" level=info msg="TearDown network for sandbox \"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" successfully" May 17 00:41:36.596349 env[1203]: time="2025-05-17T00:41:36.596126210Z" level=info msg="StopPodSandbox for 
\"a3adc4758f010afac16fa8fdc27aed2b3b9ed7214bbc65e59dd92a13d25c1620\" returns successfully" May 17 00:41:36.758136 kubelet[1405]: I0517 00:41:36.747487 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-hostproc\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758136 kubelet[1405]: I0517 00:41:36.750720 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-config-path\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758136 kubelet[1405]: I0517 00:41:36.750763 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndcx7\" (UniqueName: \"kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-kube-api-access-ndcx7\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758136 kubelet[1405]: I0517 00:41:36.750786 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-run\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758136 kubelet[1405]: I0517 00:41:36.750811 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cni-path\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758136 kubelet[1405]: I0517 00:41:36.750835 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-kernel\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758516 kubelet[1405]: I0517 00:41:36.750858 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-etc-cni-netd\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758516 kubelet[1405]: I0517 00:41:36.750879 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-xtables-lock\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758516 kubelet[1405]: I0517 00:41:36.750902 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-hubble-tls\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758516 kubelet[1405]: I0517 00:41:36.750929 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-net\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758516 kubelet[1405]: I0517 00:41:36.750955 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2eb883d-e50b-483f-a74d-3846f4b60594-clustermesh-secrets\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758516 
kubelet[1405]: I0517 00:41:36.750975 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-lib-modules\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758786 kubelet[1405]: I0517 00:41:36.750995 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-cgroup\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758786 kubelet[1405]: I0517 00:41:36.751016 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-bpf-maps\") pod \"d2eb883d-e50b-483f-a74d-3846f4b60594\" (UID: \"d2eb883d-e50b-483f-a74d-3846f4b60594\") " May 17 00:41:36.758786 kubelet[1405]: I0517 00:41:36.751108 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.758786 kubelet[1405]: I0517 00:41:36.757182 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.758786 kubelet[1405]: I0517 00:41:36.757256 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.758972 kubelet[1405]: I0517 00:41:36.757840 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.761365 kubelet[1405]: I0517 00:41:36.761331 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.762339 kubelet[1405]: I0517 00:41:36.762314 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.762478 kubelet[1405]: I0517 00:41:36.762458 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.762647 kubelet[1405]: I0517 00:41:36.762621 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.768557 kubelet[1405]: I0517 00:41:36.767310 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.768557 kubelet[1405]: I0517 00:41:36.767379 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:36.768557 kubelet[1405]: I0517 00:41:36.767743 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:41:36.774723 kubelet[1405]: I0517 00:41:36.774569 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2eb883d-e50b-483f-a74d-3846f4b60594-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:41:36.775546 kubelet[1405]: I0517 00:41:36.775456 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-kube-api-access-ndcx7" (OuterVolumeSpecName: "kube-api-access-ndcx7") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "kube-api-access-ndcx7". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:36.779206 kubelet[1405]: I0517 00:41:36.778346 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2eb883d-e50b-483f-a74d-3846f4b60594" (UID: "d2eb883d-e50b-483f-a74d-3846f4b60594"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:36.788100 systemd[1]: var-lib-kubelet-pods-d2eb883d\x2de50b\x2d483f\x2da74d\x2d3846f4b60594-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dndcx7.mount: Deactivated successfully. May 17 00:41:36.788229 systemd[1]: var-lib-kubelet-pods-d2eb883d\x2de50b\x2d483f\x2da74d\x2d3846f4b60594-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:41:36.788305 systemd[1]: var-lib-kubelet-pods-d2eb883d\x2de50b\x2d483f\x2da74d\x2d3846f4b60594-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:41:36.813454 kubelet[1405]: E0517 00:41:36.813278 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852096 1405 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-net\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852150 1405 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2eb883d-e50b-483f-a74d-3846f4b60594-clustermesh-secrets\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852163 1405 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-lib-modules\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852175 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-cgroup\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852186 1405 
reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-hubble-tls\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852205 1405 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-bpf-maps\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852217 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-config-path\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853371 kubelet[1405]: I0517 00:41:36.852228 1405 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ndcx7\" (UniqueName: \"kubernetes.io/projected/d2eb883d-e50b-483f-a74d-3846f4b60594-kube-api-access-ndcx7\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853829 kubelet[1405]: I0517 00:41:36.852238 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cilium-run\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853829 kubelet[1405]: I0517 00:41:36.852246 1405 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-hostproc\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853829 kubelet[1405]: I0517 00:41:36.852257 1405 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-host-proc-sys-kernel\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853829 kubelet[1405]: I0517 00:41:36.852268 1405 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-etc-cni-netd\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853829 kubelet[1405]: I0517 00:41:36.852277 1405 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-xtables-lock\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:36.853829 kubelet[1405]: I0517 00:41:36.852287 1405 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2eb883d-e50b-483f-a74d-3846f4b60594-cni-path\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:37.125827 systemd[1]: Removed slice kubepods-burstable-podd2eb883d_e50b_483f_a74d_3846f4b60594.slice. May 17 00:41:37.125939 systemd[1]: kubepods-burstable-podd2eb883d_e50b_483f_a74d_3846f4b60594.slice: Consumed 12.231s CPU time. May 17 00:41:37.451022 kubelet[1405]: I0517 00:41:37.450901 1405 scope.go:117] "RemoveContainer" containerID="28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac" May 17 00:41:37.456873 env[1203]: time="2025-05-17T00:41:37.453694020Z" level=info msg="RemoveContainer for \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\"" May 17 00:41:37.482803 env[1203]: time="2025-05-17T00:41:37.482722332Z" level=info msg="RemoveContainer for \"28e2b7f537e532330291e17caa40332bf2c48577b48a7a51bae170869c7861ac\" returns successfully" May 17 00:41:37.483257 kubelet[1405]: I0517 00:41:37.483191 1405 scope.go:117] "RemoveContainer" containerID="2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74" May 17 00:41:37.494968 env[1203]: time="2025-05-17T00:41:37.494871162Z" level=info msg="RemoveContainer for \"2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74\"" May 17 00:41:37.508854 env[1203]: time="2025-05-17T00:41:37.508784294Z" level=info msg="RemoveContainer for \"2930b15c1b900093fcef85494204cd267c77b591e81a84b60ea2901777830e74\" returns successfully" May 
17 00:41:37.509411 kubelet[1405]: I0517 00:41:37.509354 1405 scope.go:117] "RemoveContainer" containerID="30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe" May 17 00:41:37.531699 env[1203]: time="2025-05-17T00:41:37.531269742Z" level=info msg="RemoveContainer for \"30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe\"" May 17 00:41:37.536797 env[1203]: time="2025-05-17T00:41:37.536715045Z" level=info msg="RemoveContainer for \"30837864e61de3b26f2542553f2a08c389af702e02e7c37015794e4649007efe\" returns successfully" May 17 00:41:37.537860 kubelet[1405]: I0517 00:41:37.537234 1405 scope.go:117] "RemoveContainer" containerID="73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de" May 17 00:41:37.542831 env[1203]: time="2025-05-17T00:41:37.542199733Z" level=info msg="RemoveContainer for \"73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de\"" May 17 00:41:37.554876 env[1203]: time="2025-05-17T00:41:37.552982778Z" level=info msg="RemoveContainer for \"73473386265d8e8b96dd5b6a05f5c80240f42f8950191f6bb9d80c3e7ce204de\" returns successfully" May 17 00:41:37.554876 env[1203]: time="2025-05-17T00:41:37.554441397Z" level=info msg="RemoveContainer for \"38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd\"" May 17 00:41:37.555080 kubelet[1405]: I0517 00:41:37.553332 1405 scope.go:117] "RemoveContainer" containerID="38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd" May 17 00:41:37.564142 env[1203]: time="2025-05-17T00:41:37.564090383Z" level=info msg="RemoveContainer for \"38ac7c9d1e053fe6e2d0e5ec429f9b6b761765a43e3e7f54c9a2b4fcab72c3dd\" returns successfully" May 17 00:41:37.819616 kubelet[1405]: E0517 00:41:37.814649 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:37.991762 kubelet[1405]: E0517 00:41:37.991593 1405 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 17 00:41:38.815013 kubelet[1405]: E0517 00:41:38.814847 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:38.945449 kubelet[1405]: I0517 00:41:38.943210 1405 memory_manager.go:355] "RemoveStaleState removing state" podUID="d2eb883d-e50b-483f-a74d-3846f4b60594" containerName="cilium-agent" May 17 00:41:38.986386 systemd[1]: Created slice kubepods-burstable-pode53c8757_1e08_4be3_a0f6_0a6041a77634.slice. May 17 00:41:39.008385 systemd[1]: Created slice kubepods-besteffort-pod51c97435_2724_4237_9900_832ea6cf243d.slice. May 17 00:41:39.073105 kubelet[1405]: I0517 00:41:39.072646 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-bpf-maps\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073105 kubelet[1405]: I0517 00:41:39.072702 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-lib-modules\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073105 kubelet[1405]: I0517 00:41:39.072727 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-hubble-tls\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073105 kubelet[1405]: I0517 00:41:39.072748 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctjkt\" (UniqueName: 
\"kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-kube-api-access-ctjkt\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073105 kubelet[1405]: I0517 00:41:39.072767 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-ipsec-secrets\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073105 kubelet[1405]: I0517 00:41:39.072784 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-kernel\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073406 kubelet[1405]: I0517 00:41:39.072801 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51c97435-2724-4237-9900-832ea6cf243d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bndfc\" (UID: \"51c97435-2724-4237-9900-832ea6cf243d\") " pod="kube-system/cilium-operator-6c4d7847fc-bndfc" May 17 00:41:39.073406 kubelet[1405]: I0517 00:41:39.072826 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-run\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073406 kubelet[1405]: I0517 00:41:39.072841 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-hostproc\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073406 kubelet[1405]: I0517 00:41:39.072857 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-cgroup\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073406 kubelet[1405]: I0517 00:41:39.072874 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cni-path\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073406 kubelet[1405]: I0517 00:41:39.072890 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-etc-cni-netd\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073606 kubelet[1405]: I0517 00:41:39.072908 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-xtables-lock\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073606 kubelet[1405]: I0517 00:41:39.072944 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-clustermesh-secrets\") pod \"cilium-r7ffp\" (UID: 
\"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073606 kubelet[1405]: I0517 00:41:39.072973 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qccvp\" (UniqueName: \"kubernetes.io/projected/51c97435-2724-4237-9900-832ea6cf243d-kube-api-access-qccvp\") pod \"cilium-operator-6c4d7847fc-bndfc\" (UID: \"51c97435-2724-4237-9900-832ea6cf243d\") " pod="kube-system/cilium-operator-6c4d7847fc-bndfc" May 17 00:41:39.073606 kubelet[1405]: I0517 00:41:39.072996 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-config-path\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.073606 kubelet[1405]: I0517 00:41:39.073018 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-net\") pod \"cilium-r7ffp\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " pod="kube-system/cilium-r7ffp" May 17 00:41:39.105318 kubelet[1405]: I0517 00:41:39.105241 1405 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2eb883d-e50b-483f-a74d-3846f4b60594" path="/var/lib/kubelet/pods/d2eb883d-e50b-483f-a74d-3846f4b60594/volumes" May 17 00:41:39.181716 kubelet[1405]: E0517 00:41:39.178662 1405 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:41:39.305952 kubelet[1405]: E0517 00:41:39.305895 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 17 00:41:39.312390 env[1203]: time="2025-05-17T00:41:39.312089594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r7ffp,Uid:e53c8757-1e08-4be3-a0f6-0a6041a77634,Namespace:kube-system,Attempt:0,}" May 17 00:41:39.313439 kubelet[1405]: E0517 00:41:39.313411 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:39.319640 env[1203]: time="2025-05-17T00:41:39.314994839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bndfc,Uid:51c97435-2724-4237-9900-832ea6cf243d,Namespace:kube-system,Attempt:0,}" May 17 00:41:39.380262 env[1203]: time="2025-05-17T00:41:39.372899174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:39.380262 env[1203]: time="2025-05-17T00:41:39.372957503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:39.380262 env[1203]: time="2025-05-17T00:41:39.372972651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:39.380262 env[1203]: time="2025-05-17T00:41:39.373150355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c pid=2981 runtime=io.containerd.runc.v2 May 17 00:41:39.394452 env[1203]: time="2025-05-17T00:41:39.394256478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:39.394452 env[1203]: time="2025-05-17T00:41:39.394302153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:39.394452 env[1203]: time="2025-05-17T00:41:39.394316600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:39.394870 env[1203]: time="2025-05-17T00:41:39.394788757Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7178ae4285bb6817fa54af99d2154cfdcb3cbd23a3fb6b49cb8081e4eebe1a0c pid=2997 runtime=io.containerd.runc.v2 May 17 00:41:39.425763 systemd[1]: Started cri-containerd-e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c.scope. May 17 00:41:39.443275 systemd[1]: Started cri-containerd-7178ae4285bb6817fa54af99d2154cfdcb3cbd23a3fb6b49cb8081e4eebe1a0c.scope. May 17 00:41:39.553743 env[1203]: time="2025-05-17T00:41:39.549911601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r7ffp,Uid:e53c8757-1e08-4be3-a0f6-0a6041a77634,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c\"" May 17 00:41:39.553743 env[1203]: time="2025-05-17T00:41:39.552817897Z" level=info msg="CreateContainer within sandbox \"e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:41:39.553914 kubelet[1405]: E0517 00:41:39.550710 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:39.601635 env[1203]: time="2025-05-17T00:41:39.601414778Z" level=info msg="CreateContainer within sandbox \"e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\"" May 17 00:41:39.602177 env[1203]: 
time="2025-05-17T00:41:39.602085888Z" level=info msg="StartContainer for \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\"" May 17 00:41:39.639496 env[1203]: time="2025-05-17T00:41:39.638884811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bndfc,Uid:51c97435-2724-4237-9900-832ea6cf243d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7178ae4285bb6817fa54af99d2154cfdcb3cbd23a3fb6b49cb8081e4eebe1a0c\"" May 17 00:41:39.650436 kubelet[1405]: E0517 00:41:39.648415 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:39.656923 env[1203]: time="2025-05-17T00:41:39.656813399Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 17 00:41:39.677538 systemd[1]: Started cri-containerd-0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980.scope. May 17 00:41:39.710426 systemd[1]: cri-containerd-0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980.scope: Deactivated successfully. 
May 17 00:41:39.763390 env[1203]: time="2025-05-17T00:41:39.763312149Z" level=info msg="shim disconnected" id=0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980 May 17 00:41:39.763390 env[1203]: time="2025-05-17T00:41:39.763386329Z" level=warning msg="cleaning up after shim disconnected" id=0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980 namespace=k8s.io May 17 00:41:39.763390 env[1203]: time="2025-05-17T00:41:39.763403561Z" level=info msg="cleaning up dead shim" May 17 00:41:39.787422 env[1203]: time="2025-05-17T00:41:39.787314158Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:41:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:41:39.794584 env[1203]: time="2025-05-17T00:41:39.787673443Z" level=error msg="copy shim log" error="read /proc/self/fd/59: file already closed" May 17 00:41:39.794584 env[1203]: time="2025-05-17T00:41:39.787939693Z" level=error msg="Failed to pipe stdout of container \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\"" error="reading from a closed fifo" May 17 00:41:39.802788 env[1203]: time="2025-05-17T00:41:39.802683933Z" level=error msg="Failed to pipe stderr of container \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\"" error="reading from a closed fifo" May 17 00:41:39.815764 kubelet[1405]: E0517 00:41:39.815476 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:39.846490 env[1203]: time="2025-05-17T00:41:39.846357421Z" level=error msg="StartContainer for \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\" failed" 
error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:41:39.846833 kubelet[1405]: E0517 00:41:39.846757 1405 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980" May 17 00:41:39.847005 kubelet[1405]: E0517 00:41:39.846985 1405 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 17 00:41:39.847005 kubelet[1405]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:41:39.847005 kubelet[1405]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:41:39.847005 kubelet[1405]: rm /hostbin/cilium-mount May 17 00:41:39.847323 kubelet[1405]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctjkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-r7ffp_kube-system(e53c8757-1e08-4be3-a0f6-0a6041a77634): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:41:39.847323 kubelet[1405]: > logger="UnhandledError" May 17 00:41:39.848525 kubelet[1405]: E0517 00:41:39.848433 1405 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-r7ffp" podUID="e53c8757-1e08-4be3-a0f6-0a6041a77634" May 17 00:41:40.473355 env[1203]: time="2025-05-17T00:41:40.473290272Z" level=info msg="StopPodSandbox for \"e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c\"" May 17 00:41:40.473355 env[1203]: time="2025-05-17T00:41:40.473357078Z" level=info msg="Container to stop \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:41:40.479296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c-shm.mount: Deactivated successfully. May 17 00:41:40.491404 systemd[1]: cri-containerd-e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c.scope: Deactivated successfully. May 17 00:41:40.559676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c-rootfs.mount: Deactivated successfully. 
May 17 00:41:40.586317 env[1203]: time="2025-05-17T00:41:40.586262124Z" level=info msg="shim disconnected" id=e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c May 17 00:41:40.586653 env[1203]: time="2025-05-17T00:41:40.586602574Z" level=warning msg="cleaning up after shim disconnected" id=e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c namespace=k8s.io May 17 00:41:40.586653 env[1203]: time="2025-05-17T00:41:40.586626959Z" level=info msg="cleaning up dead shim" May 17 00:41:40.596225 env[1203]: time="2025-05-17T00:41:40.596154094Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3110 runtime=io.containerd.runc.v2\n" May 17 00:41:40.596561 env[1203]: time="2025-05-17T00:41:40.596514390Z" level=info msg="TearDown network for sandbox \"e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c\" successfully" May 17 00:41:40.596561 env[1203]: time="2025-05-17T00:41:40.596544687Z" level=info msg="StopPodSandbox for \"e0988df2c32d4078a9a1776d16d4d3d4e77503d315b0f201d3fb6a304c8c1e4c\" returns successfully" May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712270 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-bpf-maps\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712420 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-ipsec-secrets\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712444 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-run\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712468 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-config-path\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712459 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712505 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-clustermesh-secrets\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712632 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-hubble-tls\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712656 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-xtables-lock\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: 
\"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712639 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712691 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.712673 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-cgroup\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.713263 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cni-path\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.713310 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctjkt\" (UniqueName: \"kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-kube-api-access-ctjkt\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: 
I0517 00:41:40.713333 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-etc-cni-netd\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.713356 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-net\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.718537 kubelet[1405]: I0517 00:41:40.713374 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-lib-modules\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713396 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-kernel\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713414 1405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-hostproc\") pod \"e53c8757-1e08-4be3-a0f6-0a6041a77634\" (UID: \"e53c8757-1e08-4be3-a0f6-0a6041a77634\") " May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713465 1405 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-bpf-maps\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.719491 
kubelet[1405]: I0517 00:41:40.713490 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-run\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713502 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-cgroup\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713529 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-hostproc" (OuterVolumeSpecName: "hostproc") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713551 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cni-path" (OuterVolumeSpecName: "cni-path") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713818 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713849 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713883 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713902 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.713919 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:41:40.719491 kubelet[1405]: I0517 00:41:40.715318 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:41:40.723938 systemd[1]: var-lib-kubelet-pods-e53c8757\x2d1e08\x2d4be3\x2da0f6\x2d0a6041a77634-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:41:40.724054 systemd[1]: var-lib-kubelet-pods-e53c8757\x2d1e08\x2d4be3\x2da0f6\x2d0a6041a77634-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:41:40.724115 systemd[1]: var-lib-kubelet-pods-e53c8757\x2d1e08\x2d4be3\x2da0f6\x2d0a6041a77634-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:41:40.727090 kubelet[1405]: I0517 00:41:40.726395 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:41:40.731005 systemd[1]: var-lib-kubelet-pods-e53c8757\x2d1e08\x2d4be3\x2da0f6\x2d0a6041a77634-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dctjkt.mount: Deactivated successfully. 
May 17 00:41:40.732496 kubelet[1405]: I0517 00:41:40.732413 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-kube-api-access-ctjkt" (OuterVolumeSpecName: "kube-api-access-ctjkt") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "kube-api-access-ctjkt". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:40.732604 kubelet[1405]: I0517 00:41:40.732537 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:41:40.733282 kubelet[1405]: I0517 00:41:40.733203 1405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e53c8757-1e08-4be3-a0f6-0a6041a77634" (UID: "e53c8757-1e08-4be3-a0f6-0a6041a77634"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:41:40.792354 kubelet[1405]: I0517 00:41:40.791365 1405 setters.go:602] "Node became not ready" node="10.0.0.140" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:41:40Z","lastTransitionTime":"2025-05-17T00:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814525 1405 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-hubble-tls\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814564 1405 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-xtables-lock\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814601 1405 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-cni-path\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814612 1405 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-clustermesh-secrets\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814624 1405 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ctjkt\" (UniqueName: \"kubernetes.io/projected/e53c8757-1e08-4be3-a0f6-0a6041a77634-kube-api-access-ctjkt\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814634 1405 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-etc-cni-netd\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814643 1405 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-lib-modules\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814651 1405 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-kernel\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814660 1405 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-hostproc\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814669 1405 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e53c8757-1e08-4be3-a0f6-0a6041a77634-host-proc-sys-net\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814678 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-ipsec-secrets\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.814670 kubelet[1405]: I0517 00:41:40.814687 1405 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e53c8757-1e08-4be3-a0f6-0a6041a77634-cilium-config-path\") on node \"10.0.0.140\" DevicePath \"\"" May 17 00:41:40.815855 kubelet[1405]: E0517 00:41:40.815802 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:41.115149 systemd[1]: 
Removed slice kubepods-burstable-pode53c8757_1e08_4be3_a0f6_0a6041a77634.slice. May 17 00:41:41.400519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611689603.mount: Deactivated successfully. May 17 00:41:41.485497 kubelet[1405]: I0517 00:41:41.481229 1405 scope.go:117] "RemoveContainer" containerID="0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980" May 17 00:41:41.485706 env[1203]: time="2025-05-17T00:41:41.483319005Z" level=info msg="RemoveContainer for \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\"" May 17 00:41:41.503504 env[1203]: time="2025-05-17T00:41:41.503422080Z" level=info msg="RemoveContainer for \"0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980\" returns successfully" May 17 00:41:41.618005 kubelet[1405]: I0517 00:41:41.617952 1405 memory_manager.go:355] "RemoveStaleState removing state" podUID="e53c8757-1e08-4be3-a0f6-0a6041a77634" containerName="mount-cgroup" May 17 00:41:41.649924 systemd[1]: Created slice kubepods-burstable-pod5412539b_211a_4050_8aa5_73b599fbad40.slice. 
May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.728957 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5412539b-211a-4050-8aa5-73b599fbad40-cilium-config-path\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729006 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75ldk\" (UniqueName: \"kubernetes.io/projected/5412539b-211a-4050-8aa5-73b599fbad40-kube-api-access-75ldk\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729030 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-xtables-lock\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729049 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5412539b-211a-4050-8aa5-73b599fbad40-cilium-ipsec-secrets\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729069 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-host-proc-sys-kernel\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729088 1405 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-cilium-cgroup\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729105 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5412539b-211a-4050-8aa5-73b599fbad40-clustermesh-secrets\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729123 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-cilium-run\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729142 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-hostproc\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729159 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-lib-modules\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729180 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-bpf-maps\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729200 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-cni-path\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729217 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-etc-cni-netd\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729238 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5412539b-211a-4050-8aa5-73b599fbad40-host-proc-sys-net\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.732903 kubelet[1405]: I0517 00:41:41.729256 1405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5412539b-211a-4050-8aa5-73b599fbad40-hubble-tls\") pod \"cilium-w5cr7\" (UID: \"5412539b-211a-4050-8aa5-73b599fbad40\") " pod="kube-system/cilium-w5cr7" May 17 00:41:41.816159 kubelet[1405]: E0517 00:41:41.816064 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:41:41.973025 kubelet[1405]: E0517 00:41:41.972969 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:41.973795 env[1203]: time="2025-05-17T00:41:41.973728120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5cr7,Uid:5412539b-211a-4050-8aa5-73b599fbad40,Namespace:kube-system,Attempt:0,}" May 17 00:41:42.016661 env[1203]: time="2025-05-17T00:41:42.015726004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:41:42.016661 env[1203]: time="2025-05-17T00:41:42.015792459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:41:42.016661 env[1203]: time="2025-05-17T00:41:42.015807948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:41:42.016661 env[1203]: time="2025-05-17T00:41:42.016186629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d pid=3137 runtime=io.containerd.runc.v2 May 17 00:41:42.086013 systemd[1]: Started cri-containerd-5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d.scope. 
May 17 00:41:42.102968 kubelet[1405]: E0517 00:41:42.101580 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:42.236423 env[1203]: time="2025-05-17T00:41:42.236158032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5cr7,Uid:5412539b-211a-4050-8aa5-73b599fbad40,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\"" May 17 00:41:42.242489 kubelet[1405]: E0517 00:41:42.241941 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:41:42.248772 env[1203]: time="2025-05-17T00:41:42.248726494Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:41:42.321125 env[1203]: time="2025-05-17T00:41:42.315362692Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a\"" May 17 00:41:42.327365 env[1203]: time="2025-05-17T00:41:42.325872459Z" level=info msg="StartContainer for \"3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a\"" May 17 00:41:42.400700 systemd[1]: Started cri-containerd-3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a.scope. 
May 17 00:41:42.602980 env[1203]: time="2025-05-17T00:41:42.602683071Z" level=info msg="StartContainer for \"3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a\" returns successfully"
May 17 00:41:42.623036 systemd[1]: cri-containerd-3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a.scope: Deactivated successfully.
May 17 00:41:42.684646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a-rootfs.mount: Deactivated successfully.
May 17 00:41:42.816675 kubelet[1405]: E0517 00:41:42.816511 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:42.872291 kubelet[1405]: W0517 00:41:42.871038 1405 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode53c8757_1e08_4be3_a0f6_0a6041a77634.slice/cri-containerd-0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980.scope WatchSource:0}: container "0e59c9a76f1dceeffb163864bf58a47e945ef88c1e5c7751676ae82b9caf3980" in namespace "k8s.io": not found
May 17 00:41:42.908843 env[1203]: time="2025-05-17T00:41:42.908780525Z" level=info msg="shim disconnected" id=3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a
May 17 00:41:42.908843 env[1203]: time="2025-05-17T00:41:42.908833995Z" level=warning msg="cleaning up after shim disconnected" id=3244476fd02e5f23c27f3101e3b55c4aec6e10b41b2611bb809124fae2f22b4a namespace=k8s.io
May 17 00:41:42.908843 env[1203]: time="2025-05-17T00:41:42.908846058Z" level=info msg="cleaning up dead shim"
May 17 00:41:42.927941 env[1203]: time="2025-05-17T00:41:42.927867962Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3222 runtime=io.containerd.runc.v2\n"
May 17 00:41:43.103774 kubelet[1405]: I0517 00:41:43.103662 1405 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e53c8757-1e08-4be3-a0f6-0a6041a77634" path="/var/lib/kubelet/pods/e53c8757-1e08-4be3-a0f6-0a6041a77634/volumes"
May 17 00:41:43.120290 env[1203]: time="2025-05-17T00:41:43.115333451Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:43.120290 env[1203]: time="2025-05-17T00:41:43.118063566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:43.125541 env[1203]: time="2025-05-17T00:41:43.123167666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:41:43.125541 env[1203]: time="2025-05-17T00:41:43.123475124Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 00:41:43.128362 env[1203]: time="2025-05-17T00:41:43.128298276Z" level=info msg="CreateContainer within sandbox \"7178ae4285bb6817fa54af99d2154cfdcb3cbd23a3fb6b49cb8081e4eebe1a0c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:41:43.168674 env[1203]: time="2025-05-17T00:41:43.168568661Z" level=info msg="CreateContainer within sandbox \"7178ae4285bb6817fa54af99d2154cfdcb3cbd23a3fb6b49cb8081e4eebe1a0c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e9e0633d6996012eef099a4b4ad1a46b14e8b08778b82acf92ad21d1255e8da3\""
May 17 00:41:43.169778 env[1203]: time="2025-05-17T00:41:43.169734069Z" level=info msg="StartContainer for \"e9e0633d6996012eef099a4b4ad1a46b14e8b08778b82acf92ad21d1255e8da3\""
May 17 00:41:43.208927 systemd[1]: Started cri-containerd-e9e0633d6996012eef099a4b4ad1a46b14e8b08778b82acf92ad21d1255e8da3.scope.
May 17 00:41:43.265650 env[1203]: time="2025-05-17T00:41:43.262601135Z" level=info msg="StartContainer for \"e9e0633d6996012eef099a4b4ad1a46b14e8b08778b82acf92ad21d1255e8da3\" returns successfully"
May 17 00:41:43.495453 kubelet[1405]: E0517 00:41:43.495387 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:43.498486 kubelet[1405]: E0517 00:41:43.498443 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:43.520389 env[1203]: time="2025-05-17T00:41:43.520331171Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:41:43.579205 kubelet[1405]: I0517 00:41:43.579013 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bndfc" podStartSLOduration=2.106929322 podStartE2EDuration="5.578986454s" podCreationTimestamp="2025-05-17 00:41:38 +0000 UTC" firstStartedPulling="2025-05-17 00:41:39.65453958 +0000 UTC m=+82.205930925" lastFinishedPulling="2025-05-17 00:41:43.126596712 +0000 UTC m=+85.677988057" observedRunningTime="2025-05-17 00:41:43.57876603 +0000 UTC m=+86.130157396" watchObservedRunningTime="2025-05-17 00:41:43.578986454 +0000 UTC m=+86.130377799"
May 17 00:41:43.647027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967787587.mount: Deactivated successfully.
May 17 00:41:43.816986 kubelet[1405]: E0517 00:41:43.816774 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:43.846841 env[1203]: time="2025-05-17T00:41:43.846652450Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca\""
May 17 00:41:43.851915 env[1203]: time="2025-05-17T00:41:43.851741291Z" level=info msg="StartContainer for \"faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca\""
May 17 00:41:43.927689 systemd[1]: Started cri-containerd-faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca.scope.
May 17 00:41:44.023211 systemd[1]: cri-containerd-faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca.scope: Deactivated successfully.
May 17 00:41:44.029064 env[1203]: time="2025-05-17T00:41:44.028989785Z" level=info msg="StartContainer for \"faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca\" returns successfully"
May 17 00:41:44.180425 kubelet[1405]: E0517 00:41:44.179873 1405 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:41:44.386148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca-rootfs.mount: Deactivated successfully.
May 17 00:41:44.401237 env[1203]: time="2025-05-17T00:41:44.401185879Z" level=info msg="shim disconnected" id=faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca
May 17 00:41:44.401504 env[1203]: time="2025-05-17T00:41:44.401484229Z" level=warning msg="cleaning up after shim disconnected" id=faeb8a9824358a984b8810459197ca274a233135a134c698e24410c63a969fca namespace=k8s.io
May 17 00:41:44.401600 env[1203]: time="2025-05-17T00:41:44.401566673Z" level=info msg="cleaning up dead shim"
May 17 00:41:44.429908 env[1203]: time="2025-05-17T00:41:44.429795646Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3318 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:41:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
May 17 00:41:44.509064 kubelet[1405]: E0517 00:41:44.507993 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:44.509064 kubelet[1405]: E0517 00:41:44.508609 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:44.519947 env[1203]: time="2025-05-17T00:41:44.519303561Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:41:44.678496 env[1203]: time="2025-05-17T00:41:44.671761415Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5\""
May 17 00:41:44.678496 env[1203]: time="2025-05-17T00:41:44.677381412Z" level=info msg="StartContainer for \"e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5\""
May 17 00:41:44.764753 systemd[1]: Started cri-containerd-e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5.scope.
May 17 00:41:44.817695 kubelet[1405]: E0517 00:41:44.817561 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:44.862614 env[1203]: time="2025-05-17T00:41:44.861623501Z" level=info msg="StartContainer for \"e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5\" returns successfully"
May 17 00:41:44.871541 systemd[1]: cri-containerd-e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5.scope: Deactivated successfully.
May 17 00:41:44.980468 env[1203]: time="2025-05-17T00:41:44.975386891Z" level=info msg="shim disconnected" id=e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5
May 17 00:41:44.980468 env[1203]: time="2025-05-17T00:41:44.975439199Z" level=warning msg="cleaning up after shim disconnected" id=e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5 namespace=k8s.io
May 17 00:41:44.980468 env[1203]: time="2025-05-17T00:41:44.975451552Z" level=info msg="cleaning up dead shim"
May 17 00:41:45.003172 env[1203]: time="2025-05-17T00:41:45.003046584Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3377 runtime=io.containerd.runc.v2\n"
May 17 00:41:45.385689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e567cdf5ad54b5388bb50aca25c5f6a7796bda45498a5790235962ec42d8d5f5-rootfs.mount: Deactivated successfully.
May 17 00:41:45.545387 kubelet[1405]: E0517 00:41:45.542232 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:45.561221 env[1203]: time="2025-05-17T00:41:45.558783652Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:41:45.591766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396067556.mount: Deactivated successfully.
May 17 00:41:45.644721 env[1203]: time="2025-05-17T00:41:45.643118039Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b\""
May 17 00:41:45.648620 env[1203]: time="2025-05-17T00:41:45.646325880Z" level=info msg="StartContainer for \"5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b\""
May 17 00:41:45.723685 systemd[1]: Started cri-containerd-5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b.scope.
May 17 00:41:45.791426 systemd[1]: cri-containerd-5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b.scope: Deactivated successfully.
May 17 00:41:45.793271 env[1203]: time="2025-05-17T00:41:45.793189513Z" level=info msg="StartContainer for \"5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b\" returns successfully"
May 17 00:41:45.817754 kubelet[1405]: E0517 00:41:45.817702 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:45.888170 env[1203]: time="2025-05-17T00:41:45.887862726Z" level=info msg="shim disconnected" id=5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b
May 17 00:41:45.888170 env[1203]: time="2025-05-17T00:41:45.887933348Z" level=warning msg="cleaning up after shim disconnected" id=5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b namespace=k8s.io
May 17 00:41:45.888170 env[1203]: time="2025-05-17T00:41:45.887948176Z" level=info msg="cleaning up dead shim"
May 17 00:41:45.906444 env[1203]: time="2025-05-17T00:41:45.906217464Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:41:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3430 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:41:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
May 17 00:41:46.385499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e1e4fab7126f98635dafc11098ea038bba68ea1c6553d7ea30ce93abfe4407b-rootfs.mount: Deactivated successfully.
May 17 00:41:46.579242 kubelet[1405]: E0517 00:41:46.578413 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:46.583324 env[1203]: time="2025-05-17T00:41:46.583252783Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:41:46.667637 env[1203]: time="2025-05-17T00:41:46.662813297Z" level=info msg="CreateContainer within sandbox \"5f42ed00d5cee09e63c2aeb3f5c3cda91ea6e5a6654dc93b6162d4076c0b0e9d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a\""
May 17 00:41:46.678326 env[1203]: time="2025-05-17T00:41:46.678261870Z" level=info msg="StartContainer for \"9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a\""
May 17 00:41:46.789118 systemd[1]: Started cri-containerd-9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a.scope.
May 17 00:41:46.819361 kubelet[1405]: E0517 00:41:46.819206 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:46.940697 env[1203]: time="2025-05-17T00:41:46.940542540Z" level=info msg="StartContainer for \"9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a\" returns successfully"
May 17 00:41:47.385707 systemd[1]: run-containerd-runc-k8s.io-9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a-runc.7v68hG.mount: Deactivated successfully.
May 17 00:41:47.611871 kubelet[1405]: E0517 00:41:47.611835 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:47.671966 kubelet[1405]: I0517 00:41:47.671783 1405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w5cr7" podStartSLOduration=6.67176223 podStartE2EDuration="6.67176223s" podCreationTimestamp="2025-05-17 00:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:41:47.671600326 +0000 UTC m=+90.222991691" watchObservedRunningTime="2025-05-17 00:41:47.67176223 +0000 UTC m=+90.223153595"
May 17 00:41:47.828598 kubelet[1405]: E0517 00:41:47.828510 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:48.437628 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:41:48.615823 kubelet[1405]: E0517 00:41:48.615757 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:48.828740 kubelet[1405]: E0517 00:41:48.828677 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:49.621405 kubelet[1405]: E0517 00:41:49.621367 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:49.829770 kubelet[1405]: E0517 00:41:49.829697 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:50.830732 kubelet[1405]: E0517 00:41:50.830665 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:51.831913 kubelet[1405]: E0517 00:41:51.831847 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:52.836463 kubelet[1405]: E0517 00:41:52.836365 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:52.941158 systemd[1]: run-containerd-runc-k8s.io-9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a-runc.pB2wOE.mount: Deactivated successfully.
May 17 00:41:53.751277 systemd-networkd[1027]: lxc_health: Link UP
May 17 00:41:53.789128 systemd-networkd[1027]: lxc_health: Gained carrier
May 17 00:41:53.789787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:41:53.837554 kubelet[1405]: E0517 00:41:53.837448 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:53.978131 kubelet[1405]: E0517 00:41:53.978089 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:54.677717 kubelet[1405]: E0517 00:41:54.677676 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:54.837739 kubelet[1405]: E0517 00:41:54.837661 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:55.318693 systemd-networkd[1027]: lxc_health: Gained IPv6LL
May 17 00:41:55.682535 kubelet[1405]: E0517 00:41:55.682417 1405 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 17 00:41:55.838356 kubelet[1405]: E0517 00:41:55.838312 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:56.839760 kubelet[1405]: E0517 00:41:56.839657 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:57.535001 systemd[1]: run-containerd-runc-k8s.io-9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a-runc.3JobGZ.mount: Deactivated successfully.
May 17 00:41:57.840096 kubelet[1405]: E0517 00:41:57.839810 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:57.992002 kubelet[1405]: E0517 00:41:57.991921 1405 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:58.851809 kubelet[1405]: E0517 00:41:58.848336 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:41:59.841392 systemd[1]: run-containerd-runc-k8s.io-9187f8e47381879701b6ffcae157d0c58ba879ecf66bd003ace13b745e29144a-runc.jcd3eh.mount: Deactivated successfully.
May 17 00:41:59.854899 kubelet[1405]: E0517 00:41:59.852959 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:42:00.854064 kubelet[1405]: E0517 00:42:00.853983 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:42:01.855765 kubelet[1405]: E0517 00:42:01.855680 1405 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"