May 13 00:49:31.857128 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025
May 13 00:49:31.857147 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:49:31.857155 kernel: BIOS-provided physical RAM map:
May 13 00:49:31.857161 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 00:49:31.857166 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 00:49:31.857172 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 00:49:31.857178 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 00:49:31.857184 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 00:49:31.857191 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:49:31.857196 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 00:49:31.857202 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 00:49:31.857207 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 00:49:31.857213 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 00:49:31.857218 kernel: NX (Execute Disable) protection: active
May 13 00:49:31.857226 kernel: SMBIOS 2.8 present.
May 13 00:49:31.857233 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 00:49:31.857239 kernel: Hypervisor detected: KVM
May 13 00:49:31.857244 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:49:31.857250 kernel: kvm-clock: cpu 0, msr 7e196001, primary cpu clock
May 13 00:49:31.857257 kernel: kvm-clock: using sched offset of 2557136411 cycles
May 13 00:49:31.857265 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:49:31.857273 kernel: tsc: Detected 2794.746 MHz processor
May 13 00:49:31.857281 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:49:31.857291 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:49:31.857299 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 00:49:31.857305 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:49:31.857313 kernel: Using GB pages for direct mapping
May 13 00:49:31.857320 kernel: ACPI: Early table checksum verification disabled
May 13 00:49:31.857328 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 00:49:31.857336 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857344 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857352 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857359 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 00:49:31.857366 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857372 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857378 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857384 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:49:31.857390 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 00:49:31.857397 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 00:49:31.857403 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 00:49:31.857413 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 00:49:31.857420 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 00:49:31.857426 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 00:49:31.857433 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 00:49:31.857448 kernel: No NUMA configuration found
May 13 00:49:31.857455 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 00:49:31.857464 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 13 00:49:31.857470 kernel: Zone ranges:
May 13 00:49:31.857477 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:49:31.857483 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 00:49:31.857490 kernel: Normal empty
May 13 00:49:31.857496 kernel: Movable zone start for each node
May 13 00:49:31.857503 kernel: Early memory node ranges
May 13 00:49:31.857510 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 00:49:31.857516 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 00:49:31.857524 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 00:49:31.857531 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:49:31.857537 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 00:49:31.857544 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 00:49:31.857551 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:49:31.857557 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:49:31.857564 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:49:31.857570 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:49:31.857577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:49:31.857584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:49:31.857592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:49:31.857598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:49:31.857605 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:49:31.857612 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:49:31.857618 kernel: TSC deadline timer available
May 13 00:49:31.857624 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:49:31.857631 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:49:31.857638 kernel: kvm-guest: setup PV sched yield
May 13 00:49:31.857644 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 00:49:31.857652 kernel: Booting paravirtualized kernel on KVM
May 13 00:49:31.857659 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:49:31.857666 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 13 00:49:31.857673 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 13 00:49:31.857680 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 13 00:49:31.857686 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:49:31.857692 kernel: kvm-guest: setup async PF for cpu 0
May 13 00:49:31.857699 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 13 00:49:31.857706 kernel: kvm-guest: PV spinlocks enabled
May 13 00:49:31.857714 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:49:31.857720 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 13 00:49:31.857727 kernel: Policy zone: DMA32
May 13 00:49:31.857734 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:49:31.857742 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:49:31.857748 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:49:31.857755 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:49:31.857762 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:49:31.857770 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved)
May 13 00:49:31.857778 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:49:31.857786 kernel: ftrace: allocating 34584 entries in 136 pages
May 13 00:49:31.857796 kernel: ftrace: allocated 136 pages with 2 groups
May 13 00:49:31.857804 kernel: rcu: Hierarchical RCU implementation.
May 13 00:49:31.857813 kernel: rcu: RCU event tracing is enabled.
May 13 00:49:31.857820 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:49:31.857827 kernel: Rude variant of Tasks RCU enabled.
May 13 00:49:31.857833 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:49:31.857842 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:49:31.857848 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:49:31.857855 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:49:31.857862 kernel: random: crng init done
May 13 00:49:31.857868 kernel: Console: colour VGA+ 80x25
May 13 00:49:31.857875 kernel: printk: console [ttyS0] enabled
May 13 00:49:31.857881 kernel: ACPI: Core revision 20210730
May 13 00:49:31.857888 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:49:31.857895 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:49:31.857902 kernel: x2apic enabled
May 13 00:49:31.857909 kernel: Switched APIC routing to physical x2apic.
May 13 00:49:31.857915 kernel: kvm-guest: setup PV IPIs
May 13 00:49:31.857922 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:49:31.857929 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:49:31.857936 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 13 00:49:31.857942 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:49:31.857949 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:49:31.857956 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:49:31.857981 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:49:31.857988 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:49:31.857995 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:49:31.858003 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:49:31.858010 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:49:31.858017 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:49:31.858025 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 13 00:49:31.858032 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:49:31.858039 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:49:31.858047 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:49:31.858054 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:49:31.858061 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 13 00:49:31.858068 kernel: Freeing SMP alternatives memory: 32K
May 13 00:49:31.858075 kernel: pid_max: default: 32768 minimum: 301
May 13 00:49:31.858082 kernel: LSM: Security Framework initializing
May 13 00:49:31.858089 kernel: SELinux: Initializing.
May 13 00:49:31.858096 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:49:31.858104 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:49:31.858111 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:49:31.858118 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:49:31.858127 kernel: ... version: 0
May 13 00:49:31.858136 kernel: ... bit width: 48
May 13 00:49:31.858145 kernel: ... generic registers: 6
May 13 00:49:31.858154 kernel: ... value mask: 0000ffffffffffff
May 13 00:49:31.858162 kernel: ... max period: 00007fffffffffff
May 13 00:49:31.858169 kernel: ... fixed-purpose events: 0
May 13 00:49:31.858178 kernel: ... event mask: 000000000000003f
May 13 00:49:31.858185 kernel: signal: max sigframe size: 1776
May 13 00:49:31.858192 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:49:31.858199 kernel: smp: Bringing up secondary CPUs ...
May 13 00:49:31.858205 kernel: x86: Booting SMP configuration:
May 13 00:49:31.858212 kernel: .... node #0, CPUs: #1
May 13 00:49:31.858219 kernel: kvm-clock: cpu 1, msr 7e196041, secondary cpu clock
May 13 00:49:31.858226 kernel: kvm-guest: setup async PF for cpu 1
May 13 00:49:31.858233 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 13 00:49:31.858241 kernel: #2
May 13 00:49:31.858248 kernel: kvm-clock: cpu 2, msr 7e196081, secondary cpu clock
May 13 00:49:31.858255 kernel: kvm-guest: setup async PF for cpu 2
May 13 00:49:31.858262 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 13 00:49:31.858269 kernel: #3
May 13 00:49:31.858275 kernel: kvm-clock: cpu 3, msr 7e1960c1, secondary cpu clock
May 13 00:49:31.858282 kernel: kvm-guest: setup async PF for cpu 3
May 13 00:49:31.858289 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 13 00:49:31.858296 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:49:31.858304 kernel: smpboot: Max logical packages: 1
May 13 00:49:31.858311 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 13 00:49:31.858318 kernel: devtmpfs: initialized
May 13 00:49:31.858325 kernel: x86/mm: Memory block size: 128MB
May 13 00:49:31.858332 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:49:31.858339 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:49:31.858346 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:49:31.858353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:49:31.858360 kernel: audit: initializing netlink subsys (disabled)
May 13 00:49:31.858368 kernel: audit: type=2000 audit(1747097370.860:1): state=initialized audit_enabled=0 res=1
May 13 00:49:31.858375 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:49:31.858382 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:49:31.858389 kernel: cpuidle: using governor menu
May 13 00:49:31.858395 kernel: ACPI: bus type PCI registered
May 13 00:49:31.858402 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:49:31.858409 kernel: dca service started, version 1.12.1
May 13 00:49:31.858416 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:49:31.858423 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 13 00:49:31.858432 kernel: PCI: Using configuration type 1 for base access
May 13 00:49:31.858446 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:49:31.858454 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:49:31.858461 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:49:31.858468 kernel: ACPI: Added _OSI(Module Device)
May 13 00:49:31.858475 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:49:31.858482 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:49:31.858489 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:49:31.858496 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:49:31.858504 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:49:31.858511 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:49:31.858518 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:49:31.858525 kernel: ACPI: Interpreter enabled
May 13 00:49:31.858532 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:49:31.858539 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:49:31.858546 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:49:31.858553 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:49:31.858559 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:49:31.858680 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:49:31.858753 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:49:31.858821 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:49:31.858831 kernel: PCI host bridge to bus 0000:00
May 13 00:49:31.858931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:49:31.859011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:49:31.859075 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:49:31.859140 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:49:31.859201 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:49:31.859259 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 00:49:31.859319 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:49:31.859451 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:49:31.859569 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:49:31.859686 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 13 00:49:31.859764 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 13 00:49:31.859837 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 13 00:49:31.859903 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:49:31.860000 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:49:31.860072 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 13 00:49:31.861237 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 13 00:49:31.861319 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 00:49:31.861601 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:49:31.861678 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 13 00:49:31.861749 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 13 00:49:31.861832 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 00:49:31.861910 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:49:31.862032 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 13 00:49:31.862107 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 13 00:49:31.862176 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 00:49:31.862253 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 13 00:49:31.862329 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:49:31.862398 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:49:31.862484 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:49:31.862558 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 13 00:49:31.862626 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 13 00:49:31.862698 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:49:31.862778 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 13 00:49:31.862792 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:49:31.862799 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:49:31.862807 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:49:31.862816 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:49:31.862828 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:49:31.862837 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:49:31.862846 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:49:31.862855 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:49:31.862864 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:49:31.862873 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:49:31.862881 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:49:31.862888 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 13 00:49:31.862895 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 13 00:49:31.862904 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 13 00:49:31.862911 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 13 00:49:31.862918 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 13 00:49:31.862925 kernel: iommu: Default domain type: Translated May 13 00:49:31.862932 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:49:31.863069 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 13 00:49:31.863143 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:49:31.863209 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 13 00:49:31.863222 kernel: vgaarb: loaded May 13 00:49:31.863229 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:49:31.863236 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:49:31.863243 kernel: PTP clock support registered May 13 00:49:31.863250 kernel: PCI: Using ACPI for IRQ routing May 13 00:49:31.863257 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:49:31.863265 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 00:49:31.863271 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 13 00:49:31.863278 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 13 00:49:31.863287 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 13 00:49:31.863294 kernel: clocksource: Switched to clocksource kvm-clock May 13 00:49:31.863301 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:49:31.863308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:49:31.863315 kernel: pnp: PnP ACPI init May 13 00:49:31.863388 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 13 00:49:31.863398 kernel: pnp: PnP ACPI: found 6 devices May 13 00:49:31.863405 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:49:31.863413 kernel: NET: Registered PF_INET protocol family May 13 00:49:31.863422 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:49:31.863429 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:49:31.863446 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:49:31.863454 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:49:31.863461 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 00:49:31.863468 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:49:31.863476 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:49:31.863483 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 
00:49:31.863491 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:49:31.863498 kernel: NET: Registered PF_XDP protocol family May 13 00:49:31.863560 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 00:49:31.863619 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 00:49:31.863676 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 00:49:31.863736 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 13 00:49:31.863797 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 13 00:49:31.863856 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 13 00:49:31.863865 kernel: PCI: CLS 0 bytes, default 64 May 13 00:49:31.863874 kernel: Initialise system trusted keyrings May 13 00:49:31.863881 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:49:31.863889 kernel: Key type asymmetric registered May 13 00:49:31.863896 kernel: Asymmetric key parser 'x509' registered May 13 00:49:31.863902 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:49:31.863909 kernel: io scheduler mq-deadline registered May 13 00:49:31.863917 kernel: io scheduler kyber registered May 13 00:49:31.863923 kernel: io scheduler bfq registered May 13 00:49:31.863930 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:49:31.863939 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 13 00:49:31.863947 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 13 00:49:31.863954 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 13 00:49:31.863971 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:49:31.863988 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:49:31.863995 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 00:49:31.864002 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:49:31.864009 
kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:49:31.864017 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:49:31.864097 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 00:49:31.864186 kernel: rtc_cmos 00:04: registered as rtc0 May 13 00:49:31.864249 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:49:31 UTC (1747097371) May 13 00:49:31.865102 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 13 00:49:31.865117 kernel: NET: Registered PF_INET6 protocol family May 13 00:49:31.865125 kernel: Segment Routing with IPv6 May 13 00:49:31.865132 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:49:31.865139 kernel: NET: Registered PF_PACKET protocol family May 13 00:49:31.865150 kernel: Key type dns_resolver registered May 13 00:49:31.865157 kernel: IPI shorthand broadcast: enabled May 13 00:49:31.865164 kernel: sched_clock: Marking stable (452392880, 102583246)->(572175636, -17199510) May 13 00:49:31.865171 kernel: registered taskstats version 1 May 13 00:49:31.865178 kernel: Loading compiled-in X.509 certificates May 13 00:49:31.865185 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 00:49:31.865192 kernel: Key type .fscrypt registered May 13 00:49:31.865199 kernel: Key type fscrypt-provisioning registered May 13 00:49:31.865206 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 00:49:31.865216 kernel: ima: Allocated hash algorithm: sha1
May 13 00:49:31.865224 kernel: ima: No architecture policies found
May 13 00:49:31.865231 kernel: clk: Disabling unused clocks
May 13 00:49:31.865238 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 13 00:49:31.865247 kernel: Write protecting the kernel read-only data: 28672k
May 13 00:49:31.865256 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 13 00:49:31.865264 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 13 00:49:31.865272 kernel: Run /init as init process
May 13 00:49:31.865281 kernel: with arguments:
May 13 00:49:31.865288 kernel: /init
May 13 00:49:31.865295 kernel: with environment:
May 13 00:49:31.865303 kernel: HOME=/
May 13 00:49:31.865309 kernel: TERM=linux
May 13 00:49:31.865317 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:49:31.865326 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:49:31.865335 systemd[1]: Detected virtualization kvm.
May 13 00:49:31.865345 systemd[1]: Detected architecture x86-64.
May 13 00:49:31.865352 systemd[1]: Running in initrd.
May 13 00:49:31.865359 systemd[1]: No hostname configured, using default hostname.
May 13 00:49:31.865367 systemd[1]: Hostname set to .
May 13 00:49:31.865375 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:49:31.865382 systemd[1]: Queued start job for default target initrd.target.
May 13 00:49:31.865390 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:49:31.865397 systemd[1]: Reached target cryptsetup.target.
May 13 00:49:31.865404 systemd[1]: Reached target paths.target.
May 13 00:49:31.865413 systemd[1]: Reached target slices.target.
May 13 00:49:31.865428 systemd[1]: Reached target swap.target.
May 13 00:49:31.865449 systemd[1]: Reached target timers.target.
May 13 00:49:31.865458 systemd[1]: Listening on iscsid.socket.
May 13 00:49:31.865466 systemd[1]: Listening on iscsiuio.socket.
May 13 00:49:31.865476 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:49:31.865484 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:49:31.865491 systemd[1]: Listening on systemd-journald.socket.
May 13 00:49:31.865499 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:49:31.865507 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:49:31.865514 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:49:31.865522 systemd[1]: Reached target sockets.target.
May 13 00:49:31.865530 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:49:31.865538 systemd[1]: Finished network-cleanup.service.
May 13 00:49:31.865547 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:49:31.865555 systemd[1]: Starting systemd-journald.service...
May 13 00:49:31.865563 systemd[1]: Starting systemd-modules-load.service...
May 13 00:49:31.865571 systemd[1]: Starting systemd-resolved.service...
May 13 00:49:31.865578 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:49:31.865586 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:49:31.865594 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:49:31.865603 kernel: audit: type=1130 audit(1747097371.855:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.865613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:49:31.865625 systemd-journald[196]: Journal started
May 13 00:49:31.865669 systemd-journald[196]: Runtime Journal (/run/log/journal/2fa55d76fcb348249f887f7980e13242) is 6.0M, max 48.5M, 42.5M free.
May 13 00:49:31.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.859673 systemd-modules-load[197]: Inserted module 'overlay'
May 13 00:49:31.898887 systemd[1]: Started systemd-journald.service.
May 13 00:49:31.898917 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:49:31.898928 kernel: Bridge firewalling registered
May 13 00:49:31.881708 systemd-resolved[198]: Positive Trust Anchors:
May 13 00:49:31.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.881718 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:49:31.905331 kernel: audit: type=1130 audit(1747097371.900:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.881745 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:49:31.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.883857 systemd-resolved[198]: Defaulting to hostname 'linux'.
May 13 00:49:31.915150 kernel: audit: type=1130 audit(1747097371.910:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.894311 systemd-modules-load[197]: Inserted module 'br_netfilter'
May 13 00:49:31.917514 kernel: SCSI subsystem initialized
May 13 00:49:31.917528 kernel: audit: type=1130 audit(1747097371.916:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.901309 systemd[1]: Started systemd-resolved.service.
May 13 00:49:31.922067 kernel: audit: type=1130 audit(1747097371.921:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:31.911686 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:49:31.917672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:49:31.931283 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:49:31.931297 kernel: device-mapper: uevent: version 1.0.3 May 13 00:49:31.931330 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:49:31.922179 systemd[1]: Reached target nss-lookup.target. May 13 00:49:31.927330 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:49:31.933910 systemd-modules-load[197]: Inserted module 'dm_multipath' May 13 00:49:31.935085 systemd[1]: Finished systemd-modules-load.service. May 13 00:49:31.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:31.937335 systemd[1]: Starting systemd-sysctl.service... May 13 00:49:31.940591 kernel: audit: type=1130 audit(1747097371.935:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:31.944919 systemd[1]: Finished systemd-sysctl.service. May 13 00:49:31.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:31.948983 kernel: audit: type=1130 audit(1747097371.944:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:31.950315 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:49:31.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:31.954985 kernel: audit: type=1130 audit(1747097371.950:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:31.955463 systemd[1]: Starting dracut-cmdline.service... May 13 00:49:31.964310 dracut-cmdline[221]: dracut-dracut-053 May 13 00:49:31.966272 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:49:32.018990 kernel: Loading iSCSI transport class v2.0-870. May 13 00:49:32.034990 kernel: iscsi: registered transport (tcp) May 13 00:49:32.055452 kernel: iscsi: registered transport (qla4xxx) May 13 00:49:32.055500 kernel: QLogic iSCSI HBA Driver May 13 00:49:32.077742 systemd[1]: Finished dracut-cmdline.service. May 13 00:49:32.082846 kernel: audit: type=1130 audit(1747097372.077:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:32.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:32.079445 systemd[1]: Starting dracut-pre-udev.service... 
May 13 00:49:32.123996 kernel: raid6: avx2x4 gen() 29811 MB/s
May 13 00:49:32.140993 kernel: raid6: avx2x4 xor() 7635 MB/s
May 13 00:49:32.158007 kernel: raid6: avx2x2 gen() 26373 MB/s
May 13 00:49:32.174989 kernel: raid6: avx2x2 xor() 19069 MB/s
May 13 00:49:32.191986 kernel: raid6: avx2x1 gen() 23383 MB/s
May 13 00:49:32.208997 kernel: raid6: avx2x1 xor() 14438 MB/s
May 13 00:49:32.226092 kernel: raid6: sse2x4 gen() 14323 MB/s
May 13 00:49:32.242993 kernel: raid6: sse2x4 xor() 6590 MB/s
May 13 00:49:32.259990 kernel: raid6: sse2x2 gen() 15120 MB/s
May 13 00:49:32.277000 kernel: raid6: sse2x2 xor() 9301 MB/s
May 13 00:49:32.293995 kernel: raid6: sse2x1 gen() 12071 MB/s
May 13 00:49:32.314304 kernel: raid6: sse2x1 xor() 7741 MB/s
May 13 00:49:32.314330 kernel: raid6: using algorithm avx2x4 gen() 29811 MB/s
May 13 00:49:32.314339 kernel: raid6: .... xor() 7635 MB/s, rmw enabled
May 13 00:49:32.315016 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:49:32.326984 kernel: xor: automatically using best checksumming function avx
May 13 00:49:32.416006 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 13 00:49:32.423456 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:49:32.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:32.424000 audit: BPF prog-id=7 op=LOAD
May 13 00:49:32.424000 audit: BPF prog-id=8 op=LOAD
May 13 00:49:32.425743 systemd[1]: Starting systemd-udevd.service...
May 13 00:49:32.437611 systemd-udevd[399]: Using default interface naming scheme 'v252'.
May 13 00:49:32.441340 systemd[1]: Started systemd-udevd.service.
May 13 00:49:32.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:32.456112 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:49:32.464390 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
May 13 00:49:32.486458 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:49:32.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:32.487977 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:49:32.520174 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:49:32.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:32.547244 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:49:32.556243 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:49:32.556258 kernel: GPT:9289727 != 19775487
May 13 00:49:32.556267 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:49:32.556276 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:49:32.556285 kernel: GPT:9289727 != 19775487
May 13 00:49:32.556294 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:49:32.556310 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:49:32.560997 kernel: libata version 3.00 loaded.
May 13 00:49:32.569123 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:49:32.569158 kernel: AES CTR mode by8 optimization enabled
May 13 00:49:32.569168 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:49:32.595323 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:49:32.595341 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:49:32.595445 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:49:32.595528 kernel: scsi host0: ahci
May 13 00:49:32.595621 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (442)
May 13 00:49:32.595631 kernel: scsi host1: ahci
May 13 00:49:32.595713 kernel: scsi host2: ahci
May 13 00:49:32.595796 kernel: scsi host3: ahci
May 13 00:49:32.595873 kernel: scsi host4: ahci
May 13 00:49:32.595956 kernel: scsi host5: ahci
May 13 00:49:32.596055 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 13 00:49:32.596065 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 13 00:49:32.596073 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 13 00:49:32.596082 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 13 00:49:32.596091 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 13 00:49:32.596100 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 13 00:49:32.588550 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:49:32.637558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:49:32.648479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:49:32.651312 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:49:32.651374 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:49:32.655699 systemd[1]: Starting disk-uuid.service...
May 13 00:49:32.928990 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:49:32.929045 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:49:32.929989 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:49:32.930998 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:49:32.931982 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:49:32.932999 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:49:32.933993 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:49:32.934014 kernel: ata3.00: applying bridge limits
May 13 00:49:32.935255 kernel: ata3.00: configured for UDMA/100
May 13 00:49:32.935979 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:49:32.977132 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:49:32.994501 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:49:32.994513 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:49:33.095649 disk-uuid[524]: Primary Header is updated.
May 13 00:49:33.095649 disk-uuid[524]: Secondary Entries is updated.
May 13 00:49:33.095649 disk-uuid[524]: Secondary Header is updated.
May 13 00:49:33.108036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:49:33.111982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:49:34.133993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:49:34.134249 disk-uuid[538]: The operation has completed successfully.
May 13 00:49:34.175155 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:49:34.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.175239 systemd[1]: Finished disk-uuid.service.
May 13 00:49:34.178977 systemd[1]: Starting verity-setup.service...
May 13 00:49:34.200996 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:49:34.219897 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:49:34.222210 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:49:34.225407 systemd[1]: Finished verity-setup.service.
May 13 00:49:34.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.280988 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:49:34.280987 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:49:34.281166 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:49:34.282397 systemd[1]: Starting ignition-setup.service...
May 13 00:49:34.284820 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:49:34.294764 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:49:34.294787 kernel: BTRFS info (device vda6): using free space tree
May 13 00:49:34.294796 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:49:34.302727 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:49:34.345365 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:49:34.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.346000 audit: BPF prog-id=9 op=LOAD
May 13 00:49:34.347504 systemd[1]: Starting systemd-networkd.service...
May 13 00:49:34.366713 systemd-networkd[709]: lo: Link UP
May 13 00:49:34.366722 systemd-networkd[709]: lo: Gained carrier
May 13 00:49:34.372154 systemd-networkd[709]: Enumeration completed
May 13 00:49:34.372227 systemd[1]: Started systemd-networkd.service.
May 13 00:49:34.373046 systemd[1]: Reached target network.target.
May 13 00:49:34.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.373860 systemd[1]: Starting iscsiuio.service...
May 13 00:49:34.377603 systemd[1]: Started iscsiuio.service.
May 13 00:49:34.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.397185 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:49:34.397782 systemd[1]: Starting iscsid.service...
May 13 00:49:34.400521 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:49:34.400521 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 13 00:49:34.400521 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:49:34.400521 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:49:34.400521 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:49:34.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.401142 systemd-networkd[709]: eth0: Link UP
May 13 00:49:34.418363 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:49:34.401145 systemd-networkd[709]: eth0: Gained carrier
May 13 00:49:34.401733 systemd[1]: Started iscsid.service.
May 13 00:49:34.403936 systemd[1]: Starting dracut-initqueue.service...
May 13 00:49:34.413592 systemd[1]: Finished dracut-initqueue.service.
May 13 00:49:34.415497 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:49:34.416568 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:49:34.416621 systemd[1]: Reached target remote-fs.target.
May 13 00:49:34.419947 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:49:34.429759 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:49:34.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.433026 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:49:34.605805 systemd[1]: Finished ignition-setup.service.
May 13 00:49:34.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.606646 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:49:34.654808 ignition[729]: Ignition 2.14.0
May 13 00:49:34.654818 ignition[729]: Stage: fetch-offline
May 13 00:49:34.654859 ignition[729]: no configs at "/usr/lib/ignition/base.d"
May 13 00:49:34.654867 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:34.654973 ignition[729]: parsed url from cmdline: ""
May 13 00:49:34.654977 ignition[729]: no config URL provided
May 13 00:49:34.654982 ignition[729]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:49:34.654988 ignition[729]: no config at "/usr/lib/ignition/user.ign"
May 13 00:49:34.655012 ignition[729]: op(1): [started] loading QEMU firmware config module
May 13 00:49:34.655017 ignition[729]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:49:34.660609 ignition[729]: op(1): [finished] loading QEMU firmware config module
May 13 00:49:34.660633 ignition[729]: QEMU firmware config was not found. Ignoring...
May 13 00:49:34.662199 ignition[729]: parsing config with SHA512: f9ed6279a2b064be1c2bc4fa4922d00349414013cdf8d9f965467cef377c884c1fb19e9cbb176d20f835f561944d6c9efb70dd48f9967011c51c877311fe4734
May 13 00:49:34.667509 unknown[729]: fetched base config from "system"
May 13 00:49:34.667516 unknown[729]: fetched user config from "qemu"
May 13 00:49:34.667817 ignition[729]: fetch-offline: fetch-offline passed
May 13 00:49:34.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.668741 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:49:34.667866 ignition[729]: Ignition finished successfully
May 13 00:49:34.670556 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:49:34.671282 systemd[1]: Starting ignition-kargs.service...
May 13 00:49:34.680075 ignition[737]: Ignition 2.14.0
May 13 00:49:34.680083 ignition[737]: Stage: kargs
May 13 00:49:34.680163 ignition[737]: no configs at "/usr/lib/ignition/base.d"
May 13 00:49:34.680172 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:34.682646 systemd[1]: Finished ignition-kargs.service.
May 13 00:49:34.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.680829 ignition[737]: kargs: kargs passed
May 13 00:49:34.685080 systemd[1]: Starting ignition-disks.service...
May 13 00:49:34.680861 ignition[737]: Ignition finished successfully
May 13 00:49:34.691624 ignition[743]: Ignition 2.14.0
May 13 00:49:34.691633 ignition[743]: Stage: disks
May 13 00:49:34.691722 ignition[743]: no configs at "/usr/lib/ignition/base.d"
May 13 00:49:34.691738 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:34.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.693609 systemd[1]: Finished ignition-disks.service.
May 13 00:49:34.692452 ignition[743]: disks: disks passed
May 13 00:49:34.711448 systemd[1]: Reached target initrd-root-device.target.
May 13 00:49:34.692489 ignition[743]: Ignition finished successfully
May 13 00:49:34.713353 systemd[1]: Reached target local-fs-pre.target.
May 13 00:49:34.714182 systemd[1]: Reached target local-fs.target.
May 13 00:49:34.714974 systemd[1]: Reached target sysinit.target.
May 13 00:49:34.716476 systemd[1]: Reached target basic.target.
May 13 00:49:34.717788 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:49:34.772889 systemd-fsck[751]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 13 00:49:34.879949 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:49:34.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.883216 systemd[1]: Mounting sysroot.mount...
May 13 00:49:34.890776 systemd[1]: Mounted sysroot.mount.
May 13 00:49:34.892276 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:49:34.892324 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:49:34.894950 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:49:34.896783 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:49:34.896828 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:49:34.898400 systemd[1]: Reached target ignition-diskful.target.
May 13 00:49:34.902452 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:49:34.904671 systemd[1]: Starting initrd-setup-root.service...
May 13 00:49:34.908586 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:49:34.912639 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory
May 13 00:49:34.915201 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:49:34.919183 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:49:34.946410 systemd[1]: Finished initrd-setup-root.service.
May 13 00:49:34.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.948992 systemd[1]: Starting ignition-mount.service...
May 13 00:49:34.951029 systemd[1]: Starting sysroot-boot.service...
May 13 00:49:34.954464 bash[802]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:49:34.962522 ignition[803]: INFO : Ignition 2.14.0
May 13 00:49:34.962522 ignition[803]: INFO : Stage: mount
May 13 00:49:34.964383 ignition[803]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:49:34.964383 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:34.964383 ignition[803]: INFO : mount: mount passed
May 13 00:49:34.964383 ignition[803]: INFO : Ignition finished successfully
May 13 00:49:34.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:34.963825 systemd[1]: Finished ignition-mount.service.
May 13 00:49:34.977844 systemd[1]: Finished sysroot-boot.service.
May 13 00:49:34.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:35.231179 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:49:35.236982 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813)
May 13 00:49:35.237013 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:49:35.239745 kernel: BTRFS info (device vda6): using free space tree
May 13 00:49:35.239764 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:49:35.242621 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:49:35.244048 systemd[1]: Starting ignition-files.service...
May 13 00:49:35.256749 ignition[833]: INFO : Ignition 2.14.0
May 13 00:49:35.256749 ignition[833]: INFO : Stage: files
May 13 00:49:35.258992 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:49:35.258992 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:49:35.258992 ignition[833]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:49:35.258992 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:49:35.258992 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:49:35.266040 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:49:35.266040 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:49:35.266040 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 00:49:35.266040 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 13 00:49:35.261810 unknown[833]: wrote ssh authorized keys file for user: core
May 13 00:49:35.538188 systemd-networkd[709]: eth0: Gained IPv6LL
May 13 00:49:35.644493 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 13 00:49:36.048671 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 13 00:49:36.048671 ignition[833]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 13 00:49:36.053118 ignition[833]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:49:36.053118 ignition[833]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:49:36.053118 ignition[833]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 13 00:49:36.053118 ignition[833]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:49:36.053118 ignition[833]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:49:36.081222 ignition[833]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:49:36.083057 ignition[833]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:49:36.083057 ignition[833]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:49:36.083057 ignition[833]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:49:36.083057 ignition[833]: INFO : files: files passed
May 13 00:49:36.083057 ignition[833]: INFO : Ignition finished successfully
May 13 00:49:36.090290 systemd[1]: Finished ignition-files.service.
May 13 00:49:36.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.092151 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:49:36.092254 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:49:36.092935 systemd[1]: Starting ignition-quench.service...
May 13 00:49:36.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.095975 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:49:36.096081 systemd[1]: Finished ignition-quench.service.
May 13 00:49:36.104434 initrd-setup-root-after-ignition[859]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:49:36.107541 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:49:36.109605 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:49:36.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.111637 systemd[1]: Reached target ignition-complete.target.
May 13 00:49:36.113445 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:49:36.127906 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:49:36.128049 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:49:36.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.129218 systemd[1]: Reached target initrd-fs.target.
May 13 00:49:36.131448 systemd[1]: Reached target initrd.target.
May 13 00:49:36.133257 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 00:49:36.134123 systemd[1]: Starting dracut-pre-pivot.service...
May 13 00:49:36.147230 systemd[1]: Finished dracut-pre-pivot.service.
May 13 00:49:36.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:36.149120 systemd[1]: Starting initrd-cleanup.service...
May 13 00:49:36.158248 systemd[1]: Stopped target nss-lookup.target.
May 13 00:49:36.158404 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 00:49:36.160813 systemd[1]: Stopped target timers.target.
May 13 00:49:36.161680 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:49:36.168366 kernel: kauditd_printk_skb: 31 callbacks suppressed May 13 00:49:36.168390 kernel: audit: type=1131 audit(1747097376.162:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.161768 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:49:36.163933 systemd[1]: Stopped target initrd.target. May 13 00:49:36.169215 systemd[1]: Stopped target basic.target. May 13 00:49:36.170030 systemd[1]: Stopped target ignition-complete.target. May 13 00:49:36.172208 systemd[1]: Stopped target ignition-diskful.target. May 13 00:49:36.172936 systemd[1]: Stopped target initrd-root-device.target. May 13 00:49:36.175373 systemd[1]: Stopped target remote-fs.target. May 13 00:49:36.176210 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:49:36.178473 systemd[1]: Stopped target sysinit.target. May 13 00:49:36.180021 systemd[1]: Stopped target local-fs.target. May 13 00:49:36.180819 systemd[1]: Stopped target local-fs-pre.target. May 13 00:49:36.182221 systemd[1]: Stopped target swap.target. May 13 00:49:36.189383 kernel: audit: type=1131 audit(1747097376.184:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.184333 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:49:36.184424 systemd[1]: Stopped dracut-pre-mount.service. 
May 13 00:49:36.195443 kernel: audit: type=1131 audit(1747097376.190:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.185230 systemd[1]: Stopped target cryptsetup.target. May 13 00:49:36.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.190203 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:49:36.201564 kernel: audit: type=1131 audit(1747097376.195:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.190280 systemd[1]: Stopped dracut-initqueue.service. May 13 00:49:36.191233 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:49:36.191337 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:49:36.196416 systemd[1]: Stopped target paths.target. May 13 00:49:36.200679 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:49:36.202029 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:49:36.213374 kernel: audit: type=1131 audit(1747097376.207:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:36.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.202444 systemd[1]: Stopped target slices.target. May 13 00:49:36.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.204242 systemd[1]: Stopped target sockets.target. May 13 00:49:36.219370 kernel: audit: type=1131 audit(1747097376.213:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.205708 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:49:36.205766 systemd[1]: Closed iscsid.socket. May 13 00:49:36.207091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:49:36.207170 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:49:36.227309 kernel: audit: type=1131 audit(1747097376.222:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:36.227382 ignition[874]: INFO : Ignition 2.14.0 May 13 00:49:36.227382 ignition[874]: INFO : Stage: umount May 13 00:49:36.227382 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:49:36.227382 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:49:36.227382 ignition[874]: INFO : umount: umount passed May 13 00:49:36.227382 ignition[874]: INFO : Ignition finished successfully May 13 00:49:36.208715 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:49:36.208812 systemd[1]: Stopped ignition-files.service. May 13 00:49:36.217679 systemd[1]: Stopping ignition-mount.service... May 13 00:49:36.218756 systemd[1]: Stopping iscsiuio.service... May 13 00:49:36.220756 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:49:36.221445 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:49:36.234287 systemd[1]: Stopping sysroot-boot.service... May 13 00:49:36.235727 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:49:36.244699 kernel: audit: type=1131 audit(1747097376.239:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.236485 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:49:36.240072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:49:36.240221 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:49:36.251309 kernel: audit: type=1131 audit(1747097376.246:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 00:49:36.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.253416 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:49:36.254810 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:49:36.255746 systemd[1]: Stopped iscsiuio.service. May 13 00:49:36.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.257524 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:49:36.260994 kernel: audit: type=1131 audit(1747097376.256:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.257593 systemd[1]: Stopped ignition-mount.service. May 13 00:49:36.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.262785 systemd[1]: Stopped target network.target. May 13 00:49:36.264505 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:49:36.264546 systemd[1]: Closed iscsiuio.socket. May 13 00:49:36.266810 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:49:36.266846 systemd[1]: Stopped ignition-disks.service. May 13 00:49:36.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.269264 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:49:36.269295 systemd[1]: Stopped ignition-kargs.service. 
May 13 00:49:36.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.271719 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:49:36.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.271749 systemd[1]: Stopped ignition-setup.service. May 13 00:49:36.274385 systemd[1]: Stopping systemd-networkd.service... May 13 00:49:36.275986 systemd[1]: Stopping systemd-resolved.service... May 13 00:49:36.277767 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:49:36.278006 systemd-networkd[709]: eth0: DHCPv6 lease lost May 13 00:49:36.279723 systemd[1]: Finished initrd-cleanup.service. May 13 00:49:36.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.281926 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:49:36.283004 systemd[1]: Stopped systemd-networkd.service. May 13 00:49:36.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.285793 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:49:36.285822 systemd[1]: Closed systemd-networkd.socket. 
May 13 00:49:36.288000 audit: BPF prog-id=9 op=UNLOAD May 13 00:49:36.288846 systemd[1]: Stopping network-cleanup.service... May 13 00:49:36.289664 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:49:36.290479 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:49:36.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.293260 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:49:36.293294 systemd[1]: Stopped systemd-sysctl.service. May 13 00:49:36.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.295784 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:49:36.295820 systemd[1]: Stopped systemd-modules-load.service. May 13 00:49:36.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.298699 systemd[1]: Stopping systemd-udevd.service... May 13 00:49:36.301281 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:49:36.302881 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:49:36.303872 systemd[1]: Stopped systemd-resolved.service. May 13 00:49:36.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.306195 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:49:36.307371 systemd[1]: Stopped sysroot-boot.service. 
May 13 00:49:36.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.309076 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:49:36.310080 systemd[1]: Stopped systemd-udevd.service. May 13 00:49:36.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.311000 audit: BPF prog-id=6 op=UNLOAD May 13 00:49:36.312657 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:49:36.313790 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:49:36.315579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:49:36.315608 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:49:36.318148 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:49:36.318188 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:49:36.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.320718 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:49:36.320749 systemd[1]: Stopped dracut-cmdline.service. May 13 00:49:36.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.323122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:49:36.323152 systemd[1]: Stopped dracut-cmdline-ask.service. 
May 13 00:49:36.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.325741 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:49:36.325772 systemd[1]: Stopped initrd-setup-root.service. May 13 00:49:36.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.328848 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 00:49:36.330987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:49:36.332207 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 00:49:36.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.334610 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:49:36.335591 systemd[1]: Stopped network-cleanup.service. May 13 00:49:36.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.337335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:49:36.338441 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 00:49:36.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:36.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.340376 systemd[1]: Reached target initrd-switch-root.target. May 13 00:49:36.342624 systemd[1]: Starting initrd-switch-root.service... May 13 00:49:36.358621 systemd[1]: Switching root. May 13 00:49:36.378754 iscsid[714]: iscsid shutting down. May 13 00:49:36.379535 systemd-journald[196]: Received SIGTERM from PID 1 (systemd). May 13 00:49:36.379569 systemd-journald[196]: Journal stopped May 13 00:49:38.846028 kernel: SELinux: Class mctp_socket not defined in policy. May 13 00:49:38.846071 kernel: SELinux: Class anon_inode not defined in policy. May 13 00:49:38.846081 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 00:49:38.846090 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:49:38.846102 kernel: SELinux: policy capability open_perms=1 May 13 00:49:38.846111 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:49:38.846121 kernel: SELinux: policy capability always_check_network=0 May 13 00:49:38.846129 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:49:38.846138 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:49:38.846147 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:49:38.846156 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:49:38.846167 systemd[1]: Successfully loaded SELinux policy in 42.282ms. May 13 00:49:38.846183 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.571ms. 
May 13 00:49:38.846195 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:49:38.846206 systemd[1]: Detected virtualization kvm. May 13 00:49:38.846216 systemd[1]: Detected architecture x86-64. May 13 00:49:38.846225 systemd[1]: Detected first boot. May 13 00:49:38.846236 systemd[1]: Initializing machine ID from VM UUID. May 13 00:49:38.846245 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 00:49:38.846255 systemd[1]: Populated /etc with preset unit settings. May 13 00:49:38.846267 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:49:38.846288 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:49:38.846302 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:49:38.846313 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:49:38.846325 systemd[1]: Stopped iscsid.service. May 13 00:49:38.846335 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:49:38.846345 systemd[1]: Stopped initrd-switch-root.service. May 13 00:49:38.846355 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:49:38.846367 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 00:49:38.846376 systemd[1]: Created slice system-addon\x2drun.slice. 
May 13 00:49:38.846386 systemd[1]: Created slice system-getty.slice. May 13 00:49:38.846397 systemd[1]: Created slice system-modprobe.slice. May 13 00:49:38.846407 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 00:49:38.846417 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 00:49:38.846427 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 00:49:38.846437 systemd[1]: Created slice user.slice. May 13 00:49:38.846446 systemd[1]: Started systemd-ask-password-console.path. May 13 00:49:38.846456 systemd[1]: Started systemd-ask-password-wall.path. May 13 00:49:38.846468 systemd[1]: Set up automount boot.automount. May 13 00:49:38.846478 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 00:49:38.846488 systemd[1]: Stopped target initrd-switch-root.target. May 13 00:49:38.846497 systemd[1]: Stopped target initrd-fs.target. May 13 00:49:38.846507 systemd[1]: Stopped target initrd-root-fs.target. May 13 00:49:38.846521 systemd[1]: Reached target integritysetup.target. May 13 00:49:38.846530 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:49:38.846540 systemd[1]: Reached target remote-fs.target. May 13 00:49:38.846550 systemd[1]: Reached target slices.target. May 13 00:49:38.846560 systemd[1]: Reached target swap.target. May 13 00:49:38.846570 systemd[1]: Reached target torcx.target. May 13 00:49:38.846580 systemd[1]: Reached target veritysetup.target. May 13 00:49:38.846589 systemd[1]: Listening on systemd-coredump.socket. May 13 00:49:38.846601 systemd[1]: Listening on systemd-initctl.socket. May 13 00:49:38.846610 systemd[1]: Listening on systemd-networkd.socket. May 13 00:49:38.846620 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:49:38.846630 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:49:38.846640 systemd[1]: Listening on systemd-userdbd.socket. May 13 00:49:38.846649 systemd[1]: Mounting dev-hugepages.mount... 
May 13 00:49:38.846659 systemd[1]: Mounting dev-mqueue.mount... May 13 00:49:38.846669 systemd[1]: Mounting media.mount... May 13 00:49:38.846679 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:38.846690 systemd[1]: Mounting sys-kernel-debug.mount... May 13 00:49:38.846700 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 00:49:38.846710 systemd[1]: Mounting tmp.mount... May 13 00:49:38.846720 systemd[1]: Starting flatcar-tmpfiles.service... May 13 00:49:38.846730 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:49:38.846740 systemd[1]: Starting kmod-static-nodes.service... May 13 00:49:38.846750 systemd[1]: Starting modprobe@configfs.service... May 13 00:49:38.846759 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:49:38.846769 systemd[1]: Starting modprobe@drm.service... May 13 00:49:38.846780 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:49:38.846789 systemd[1]: Starting modprobe@fuse.service... May 13 00:49:38.846800 systemd[1]: Starting modprobe@loop.service... May 13 00:49:38.846810 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:49:38.846821 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:49:38.846831 systemd[1]: Stopped systemd-fsck-root.service. May 13 00:49:38.846841 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:49:38.846850 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:49:38.846860 kernel: fuse: init (API version 7.34) May 13 00:49:38.846871 systemd[1]: Stopped systemd-journald.service. May 13 00:49:38.846880 kernel: loop: module loaded May 13 00:49:38.846890 systemd[1]: Starting systemd-journald.service... May 13 00:49:38.846900 systemd[1]: Starting systemd-modules-load.service... 
May 13 00:49:38.846909 systemd[1]: Starting systemd-network-generator.service... May 13 00:49:38.846919 systemd[1]: Starting systemd-remount-fs.service... May 13 00:49:38.846929 systemd[1]: Starting systemd-udev-trigger.service... May 13 00:49:38.846939 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:49:38.846952 systemd-journald[989]: Journal started May 13 00:49:38.847003 systemd-journald[989]: Runtime Journal (/run/log/journal/2fa55d76fcb348249f887f7980e13242) is 6.0M, max 48.5M, 42.5M free. May 13 00:49:36.439000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:49:36.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:49:36.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 00:49:36.622000 audit: BPF prog-id=10 op=LOAD May 13 00:49:36.622000 audit: BPF prog-id=10 op=UNLOAD May 13 00:49:36.622000 audit: BPF prog-id=11 op=LOAD May 13 00:49:36.622000 audit: BPF prog-id=11 op=UNLOAD May 13 00:49:36.652000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 00:49:36.652000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:36.652000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:49:36.654000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 00:49:36.654000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:36.654000 audit: CWD cwd="/" May 13 00:49:36.654000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:36.654000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:36.654000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 00:49:38.720000 audit: BPF prog-id=12 op=LOAD May 13 00:49:38.720000 audit: BPF prog-id=3 op=UNLOAD May 13 00:49:38.720000 audit: BPF prog-id=13 op=LOAD May 13 00:49:38.720000 audit: BPF prog-id=14 op=LOAD May 13 00:49:38.720000 audit: BPF prog-id=4 op=UNLOAD May 13 00:49:38.720000 audit: BPF prog-id=5 op=UNLOAD May 13 00:49:38.721000 
audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.737000 audit: BPF prog-id=12 op=UNLOAD May 13 00:49:38.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:38.829000 audit: BPF prog-id=15 op=LOAD May 13 00:49:38.829000 audit: BPF prog-id=16 op=LOAD May 13 00:49:38.829000 audit: BPF prog-id=17 op=LOAD May 13 00:49:38.829000 audit: BPF prog-id=13 op=UNLOAD May 13 00:49:38.829000 audit: BPF prog-id=14 op=UNLOAD May 13 00:49:38.844000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 00:49:38.844000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd7ad96900 a2=4000 a3=7ffd7ad9699c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:38.844000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 00:49:38.719505 systemd[1]: Queued start job for default target multi-user.target. May 13 00:49:36.652245 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:49:38.719515 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 00:49:38.848200 systemd[1]: Stopped verity-setup.service. May 13 00:49:36.652461 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:49:38.722219 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 13 00:49:36.652516 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:49:36.652548 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 00:49:36.652560 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 00:49:36.652594 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 00:49:36.652610 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 00:49:36.652833 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 00:49:36.652864 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 00:49:36.652875 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 00:49:36.653448 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 00:49:36.653486 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 00:49:36.653503 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 00:49:36.653516 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 00:49:38.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:36.653530 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 00:49:36.653542 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 00:49:38.467539 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:49:38.467808 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:49:38.467913 /usr/lib/systemd/system-generators/torcx-generator[908]: 
time="2025-05-13T00:49:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:49:38.468094 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 00:49:38.468138 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 00:49:38.468195 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2025-05-13T00:49:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 00:49:38.851036 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:49:38.854025 systemd[1]: Started systemd-journald.service. May 13 00:49:38.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.854496 systemd[1]: Mounted dev-hugepages.mount. May 13 00:49:38.855396 systemd[1]: Mounted dev-mqueue.mount. May 13 00:49:38.856241 systemd[1]: Mounted media.mount. May 13 00:49:38.857095 systemd[1]: Mounted sys-kernel-debug.mount. May 13 00:49:38.858029 systemd[1]: Mounted sys-kernel-tracing.mount. 
May 13 00:49:38.859019 systemd[1]: Mounted tmp.mount. May 13 00:49:38.859945 systemd[1]: Finished flatcar-tmpfiles.service. May 13 00:49:38.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.861065 systemd[1]: Finished kmod-static-nodes.service. May 13 00:49:38.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.862151 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:49:38.862263 systemd[1]: Finished modprobe@configfs.service. May 13 00:49:38.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.863391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:49:38.863495 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:49:38.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:38.864599 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:49:38.864712 systemd[1]: Finished modprobe@drm.service. May 13 00:49:38.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.865774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:49:38.865892 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:49:38.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.867107 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:49:38.867226 systemd[1]: Finished modprobe@fuse.service. May 13 00:49:38.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.868260 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 13 00:49:38.868396 systemd[1]: Finished modprobe@loop.service. May 13 00:49:38.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.869481 systemd[1]: Finished systemd-modules-load.service. May 13 00:49:38.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.870723 systemd[1]: Finished systemd-network-generator.service. May 13 00:49:38.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.871931 systemd[1]: Finished systemd-remount-fs.service. May 13 00:49:38.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.873181 systemd[1]: Reached target network-pre.target. May 13 00:49:38.875057 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 00:49:38.876710 systemd[1]: Mounting sys-kernel-config.mount... May 13 00:49:38.877886 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:49:38.879352 systemd[1]: Starting systemd-hwdb-update.service... 
May 13 00:49:38.881349 systemd[1]: Starting systemd-journal-flush.service... May 13 00:49:38.882746 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:49:38.884831 systemd-journald[989]: Time spent on flushing to /var/log/journal/2fa55d76fcb348249f887f7980e13242 is 20.113ms for 1071 entries. May 13 00:49:38.884831 systemd-journald[989]: System Journal (/var/log/journal/2fa55d76fcb348249f887f7980e13242) is 8.0M, max 195.6M, 187.6M free. May 13 00:49:38.931551 systemd-journald[989]: Received client request to flush runtime journal. May 13 00:49:38.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:38.883749 systemd[1]: Starting systemd-random-seed.service... May 13 00:49:38.886375 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:49:38.887122 systemd[1]: Starting systemd-sysctl.service... May 13 00:49:38.888824 systemd[1]: Starting systemd-sysusers.service... May 13 00:49:38.892903 systemd[1]: Finished systemd-udev-trigger.service. 
May 13 00:49:38.932798 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:49:38.894308 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 00:49:38.895686 systemd[1]: Mounted sys-kernel-config.mount. May 13 00:49:38.898628 systemd[1]: Starting systemd-udev-settle.service... May 13 00:49:38.899781 systemd[1]: Finished systemd-random-seed.service. May 13 00:49:38.901071 systemd[1]: Reached target first-boot-complete.target. May 13 00:49:38.908218 systemd[1]: Finished systemd-sysctl.service. May 13 00:49:38.914522 systemd[1]: Finished systemd-sysusers.service. May 13 00:49:38.932444 systemd[1]: Finished systemd-journal-flush.service. May 13 00:49:38.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:39.298065 systemd[1]: Finished systemd-hwdb-update.service. May 13 00:49:39.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:39.298000 audit: BPF prog-id=18 op=LOAD May 13 00:49:39.298000 audit: BPF prog-id=19 op=LOAD May 13 00:49:39.298000 audit: BPF prog-id=7 op=UNLOAD May 13 00:49:39.298000 audit: BPF prog-id=8 op=UNLOAD May 13 00:49:39.300115 systemd[1]: Starting systemd-udevd.service... May 13 00:49:39.314438 systemd-udevd[1015]: Using default interface naming scheme 'v252'. May 13 00:49:39.325571 systemd[1]: Started systemd-udevd.service. May 13 00:49:39.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:49:39.327000 audit: BPF prog-id=20 op=LOAD May 13 00:49:39.329509 systemd[1]: Starting systemd-networkd.service... May 13 00:49:39.333000 audit: BPF prog-id=21 op=LOAD May 13 00:49:39.333000 audit: BPF prog-id=22 op=LOAD May 13 00:49:39.333000 audit: BPF prog-id=23 op=LOAD May 13 00:49:39.335044 systemd[1]: Starting systemd-userdbd.service... May 13 00:49:39.364394 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 13 00:49:39.370738 systemd[1]: Started systemd-userdbd.service. May 13 00:49:39.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:49:39.380836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:49:39.394983 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 00:49:39.404987 kernel: ACPI: button: Power Button [PWRF] May 13 00:49:39.418235 systemd-networkd[1025]: lo: Link UP May 13 00:49:39.418525 systemd-networkd[1025]: lo: Gained carrier May 13 00:49:39.414000 audit[1024]: AVC avc: denied { confidentiality } for pid=1024 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 00:49:39.419225 systemd-networkd[1025]: Enumeration completed May 13 00:49:39.419465 systemd[1]: Started systemd-networkd.service. May 13 00:49:39.419742 systemd-networkd[1025]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 13 00:49:39.414000 audit[1024]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d2de8d1b40 a1=338ac a2=7f10907c6bc5 a3=5 items=110 ppid=1015 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:49:39.414000 audit: CWD cwd="/" May 13 00:49:39.414000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=1 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=2 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=3 name=(null) inode=13894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=4 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=5 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=6 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=7 name=(null) inode=13896 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=8 name=(null) inode=13896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=9 name=(null) inode=13897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=10 name=(null) inode=13896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=11 name=(null) inode=13898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=12 name=(null) inode=13896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=13 name=(null) inode=13899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=14 name=(null) inode=13896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=15 name=(null) inode=13900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=16 name=(null) inode=13896 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=17 name=(null) inode=13901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=18 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=19 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=20 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=21 name=(null) inode=13903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=22 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=23 name=(null) inode=13904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=24 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=25 name=(null) inode=13905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=26 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=27 name=(null) inode=13906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=28 name=(null) inode=13902 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=29 name=(null) inode=13907 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=30 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=31 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=32 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=33 name=(null) inode=13909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=34 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=35 name=(null) inode=13910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=36 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=37 name=(null) inode=13911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=38 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=39 name=(null) inode=13912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=40 name=(null) inode=13908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=41 name=(null) inode=13913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=42 name=(null) inode=13893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=43 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 
00:49:39.414000 audit: PATH item=44 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=45 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=46 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=47 name=(null) inode=13916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=48 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=49 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=50 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=51 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=52 name=(null) inode=13914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=53 
name=(null) inode=13919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=55 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=56 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=57 name=(null) inode=13921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=58 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=59 name=(null) inode=13922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=60 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=61 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:49:39.414000 audit: PATH item=62 name=(null) inode=13923 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.414000 audit: PATH item=63 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=64 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=65 name=(null) inode=13925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=66 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=67 name=(null) inode=13926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=68 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=69 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=70 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=71 name=(null) inode=13928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=72 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=73 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=74 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=75 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=76 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=77 name=(null) inode=13931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=78 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=79 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=80 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=81 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=82 name=(null) inode=13929 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=83 name=(null) inode=13934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=84 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=85 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=86 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=87 name=(null) inode=13936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=88 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=89 name=(null) inode=13937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=90 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=91 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=92 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=93 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=94 name=(null) inode=13935 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=95 name=(null) inode=13940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=96 name=(null) inode=13920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=97 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=98 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=99 name=(null) inode=13942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=100 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=101 name=(null) inode=13943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=102 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=103 name=(null) inode=13944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=104 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=105 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=106 name=(null) inode=13941 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=107 name=(null) inode=13946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PATH item=109 name=(null) inode=13947 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:49:39.414000 audit: PROCTITLE proctitle="(udev-worker)"
May 13 00:49:39.424474 systemd-networkd[1025]: eth0: Link UP
May 13 00:49:39.424479 systemd-networkd[1025]: eth0: Gained carrier
May 13 00:49:39.434998 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 13 00:49:39.438339 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 13 00:49:39.441014 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 13 00:49:39.441177 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 00:49:39.440113 systemd-networkd[1025]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:49:39.452988 kernel: mousedev: PS/2 mouse device common for all mice
May 13 00:49:39.513217 kernel: kvm: Nested Virtualization enabled
May 13 00:49:39.513313 kernel: SVM: kvm: Nested Paging enabled
May 13 00:49:39.514577 kernel: SVM: Virtual VMLOAD VMSAVE supported
May 13 00:49:39.514608 kernel: SVM: Virtual GIF supported
May 13 00:49:39.530984 kernel: EDAC MC: Ver: 3.0.0
May 13 00:49:39.555401 systemd[1]: Finished systemd-udev-settle.service.
May 13 00:49:39.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.557557 systemd[1]: Starting lvm2-activation-early.service...
May 13 00:49:39.565377 lvm[1050]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:49:39.589891 systemd[1]: Finished lvm2-activation-early.service.
May 13 00:49:39.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.590985 systemd[1]: Reached target cryptsetup.target.
May 13 00:49:39.592887 systemd[1]: Starting lvm2-activation.service...
May 13 00:49:39.596152 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:49:39.621881 systemd[1]: Finished lvm2-activation.service.
May 13 00:49:39.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.622831 systemd[1]: Reached target local-fs-pre.target.
May 13 00:49:39.623679 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:49:39.623696 systemd[1]: Reached target local-fs.target.
May 13 00:49:39.624483 systemd[1]: Reached target machines.target.
May 13 00:49:39.626299 systemd[1]: Starting ldconfig.service...
May 13 00:49:39.627261 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:49:39.627305 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:39.628129 systemd[1]: Starting systemd-boot-update.service...
May 13 00:49:39.629798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 13 00:49:39.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.631687 systemd[1]: Starting systemd-machine-id-commit.service...
May 13 00:49:39.633476 systemd[1]: Starting systemd-sysext.service...
May 13 00:49:39.634647 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1053 (bootctl)
May 13 00:49:39.635518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 13 00:49:39.638004 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 13 00:49:39.642119 systemd[1]: Unmounting usr-share-oem.mount...
May 13 00:49:39.646570 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 13 00:49:39.646703 systemd[1]: Unmounted usr-share-oem.mount.
May 13 00:49:39.654993 kernel: loop0: detected capacity change from 0 to 205544
May 13 00:49:39.678012 systemd-fsck[1061]: fsck.fat 4.2 (2021-01-31)
May 13 00:49:39.678012 systemd-fsck[1061]: /dev/vda1: 790 files, 120692/258078 clusters
May 13 00:49:39.679594 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 13 00:49:39.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.682784 systemd[1]: Mounting boot.mount...
May 13 00:49:39.846079 systemd[1]: Mounted boot.mount.
May 13 00:49:39.853020 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:49:39.856091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:49:39.857230 systemd[1]: Finished systemd-machine-id-commit.service.
May 13 00:49:39.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.859888 systemd[1]: Finished systemd-boot-update.service.
May 13 00:49:39.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.864999 kernel: loop1: detected capacity change from 0 to 205544
May 13 00:49:39.868665 (sd-sysext)[1066]: Using extensions 'kubernetes'.
May 13 00:49:39.869053 (sd-sysext)[1066]: Merged extensions into '/usr'.
May 13 00:49:39.883556 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:39.884664 systemd[1]: Mounting usr-share-oem.mount...
May 13 00:49:39.885899 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:49:39.887492 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:49:39.889645 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:49:39.891914 systemd[1]: Starting modprobe@loop.service...
May 13 00:49:39.892762 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:49:39.892912 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:39.893103 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:39.895862 systemd[1]: Mounted usr-share-oem.mount.
May 13 00:49:39.897107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:49:39.897237 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:49:39.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.898617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:49:39.898739 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:49:39.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.900165 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:49:39.900307 systemd[1]: Finished modprobe@loop.service.
May 13 00:49:39.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.901781 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:49:39.901904 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:49:39.903040 systemd[1]: Finished systemd-sysext.service.
May 13 00:49:39.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:39.905218 systemd[1]: Starting ensure-sysext.service...
May 13 00:49:39.907099 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 13 00:49:39.913503 systemd[1]: Reloading.
May 13 00:49:39.919455 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 13 00:49:39.921438 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:49:39.922245 ldconfig[1052]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:49:39.924763 systemd-tmpfiles[1073]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:49:39.969473 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2025-05-13T00:49:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:49:39.969772 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2025-05-13T00:49:39Z" level=info msg="torcx already run"
May 13 00:49:40.028375 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:49:40.028391 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:49:40.045195 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:49:40.095000 audit: BPF prog-id=24 op=LOAD
May 13 00:49:40.095000 audit: BPF prog-id=20 op=UNLOAD
May 13 00:49:40.097000 audit: BPF prog-id=25 op=LOAD
May 13 00:49:40.097000 audit: BPF prog-id=15 op=UNLOAD
May 13 00:49:40.097000 audit: BPF prog-id=26 op=LOAD
May 13 00:49:40.097000 audit: BPF prog-id=27 op=LOAD
May 13 00:49:40.097000 audit: BPF prog-id=16 op=UNLOAD
May 13 00:49:40.097000 audit: BPF prog-id=17 op=UNLOAD
May 13 00:49:40.097000 audit: BPF prog-id=28 op=LOAD
May 13 00:49:40.097000 audit: BPF prog-id=21 op=UNLOAD
May 13 00:49:40.097000 audit: BPF prog-id=29 op=LOAD
May 13 00:49:40.098000 audit: BPF prog-id=30 op=LOAD
May 13 00:49:40.098000 audit: BPF prog-id=22 op=UNLOAD
May 13 00:49:40.098000 audit: BPF prog-id=23 op=UNLOAD
May 13 00:49:40.099000 audit: BPF prog-id=31 op=LOAD
May 13 00:49:40.099000 audit: BPF prog-id=32 op=LOAD
May 13 00:49:40.099000 audit: BPF prog-id=18 op=UNLOAD
May 13 00:49:40.099000 audit: BPF prog-id=19 op=UNLOAD
May 13 00:49:40.102297 systemd[1]: Finished ldconfig.service.
May 13 00:49:40.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.104172 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 13 00:49:40.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.108146 systemd[1]: Starting audit-rules.service...
May 13 00:49:40.109657 systemd[1]: Starting clean-ca-certificates.service...
May 13 00:49:40.111416 systemd[1]: Starting systemd-journal-catalog-update.service...
May 13 00:49:40.112000 audit: BPF prog-id=33 op=LOAD
May 13 00:49:40.114000 audit: BPF prog-id=34 op=LOAD
May 13 00:49:40.113585 systemd[1]: Starting systemd-resolved.service...
May 13 00:49:40.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.115881 systemd[1]: Starting systemd-timesyncd.service...
May 13 00:49:40.117664 systemd[1]: Starting systemd-update-utmp.service...
May 13 00:49:40.120609 systemd[1]: Finished clean-ca-certificates.service.
May 13 00:49:40.121000 audit[1146]: SYSTEM_BOOT pid=1146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.121892 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:49:40.124870 systemd[1]: Finished systemd-update-utmp.service.
May 13 00:49:40.127463 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:40.127638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:49:40.128765 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:49:40.130707 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:49:40.132465 systemd[1]: Starting modprobe@loop.service...
May 13 00:49:40.133249 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:49:40.133354 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:40.133439 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:49:40.133500 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:40.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.134618 systemd[1]: Finished systemd-journal-catalog-update.service.
May 13 00:49:40.136040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:49:40.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.136147 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:49:40.137322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:49:40.137422 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:49:40.138679 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:49:40.138784 systemd[1]: Finished modprobe@loop.service.
May 13 00:49:40.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:49:40.139928 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:49:40.140031 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:49:40.141189 systemd[1]: Starting systemd-update-done.service...
May 13 00:49:40.145436 augenrules[1159]: No rules
May 13 00:49:40.144000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 13 00:49:40.144000 audit[1159]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf2b1abf0 a2=420 a3=0 items=0 ppid=1135 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:49:40.144000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 13 00:49:40.147023 systemd[1]: Finished audit-rules.service.
May 13 00:49:40.148168 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
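An aside on reading the audit records above: the PROCTITLE field is hex-encoded whenever the process command line contains NUL bytes separating argv elements. A minimal sketch of decoding it (the helper name `decode_proctitle` is our own, not part of any audit tooling):

```python
def decode_proctitle(hex_title: str) -> str:
    """Decode a hex-encoded audit PROCTITLE value into a readable command line.

    The kernel stores argv as NUL-separated bytes; we split on NUL and
    rejoin with spaces for display.
    """
    raw = bytes.fromhex(hex_title)
    return " ".join(part.decode() for part in raw.split(b"\0") if part)

# The PROCTITLE value from the audit-rules record above:
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))  # -> /sbin/auditctl -R /etc/audit/audit.rules
```

This matches the SYSCALL record it accompanies: `comm="auditctl"` loading `/etc/audit/audit.rules`, which is consistent with the `augenrules[1159]: No rules` line just before it.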
May 13 00:49:40.148410 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:49:40.149611 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:49:40.151456 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:49:40.153198 systemd[1]: Starting modprobe@loop.service...
May 13 00:49:40.153948 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:49:40.154056 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:40.154133 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:49:40.154191 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:40.155015 systemd[1]: Finished systemd-update-done.service.
May 13 00:49:40.156189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:49:40.156290 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:49:40.157469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:49:40.157574 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:49:40.158930 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:49:40.159053 systemd[1]: Finished modprobe@loop.service.
May 13 00:49:40.163014 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:40.163275 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:49:40.164517 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:49:40.166348 systemd[1]: Starting modprobe@drm.service...
May 13 00:49:40.168343 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:49:40.170116 systemd[1]: Starting modprobe@loop.service...
May 13 00:49:40.170922 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:49:40.171045 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:40.171892 systemd[1]: Starting systemd-networkd-wait-online.service...
May 13 00:49:40.172887 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:49:40.173006 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:49:40.173884 systemd[1]: Started systemd-timesyncd.service.
May 13 00:49:40.174109 systemd-timesyncd[1145]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:49:40.174352 systemd-timesyncd[1145]: Initial clock synchronization to Tue 2025-05-13 00:49:40.001532 UTC.
May 13 00:49:40.175336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:49:40.175441 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:49:40.176536 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:49:40.176630 systemd[1]: Finished modprobe@drm.service.
May 13 00:49:40.177694 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:49:40.177786 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:49:40.178899 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:49:40.179052 systemd[1]: Finished modprobe@loop.service.
May 13 00:49:40.180608 systemd[1]: Reached target time-set.target.
May 13 00:49:40.181512 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:49:40.181546 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:49:40.181806 systemd[1]: Finished ensure-sysext.service.
May 13 00:49:40.187795 systemd-resolved[1139]: Positive Trust Anchors:
May 13 00:49:40.187808 systemd-resolved[1139]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:49:40.187834 systemd-resolved[1139]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:49:40.194448 systemd-resolved[1139]: Defaulting to hostname 'linux'.
May 13 00:49:40.195750 systemd[1]: Started systemd-resolved.service.
May 13 00:49:40.196671 systemd[1]: Reached target network.target.
May 13 00:49:40.197436 systemd[1]: Reached target nss-lookup.target.
May 13 00:49:40.198225 systemd[1]: Reached target sysinit.target.
May 13 00:49:40.199061 systemd[1]: Started motdgen.path.
May 13 00:49:40.199746 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 13 00:49:40.200931 systemd[1]: Started logrotate.timer.
May 13 00:49:40.201718 systemd[1]: Started mdadm.timer.
May 13 00:49:40.202377 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 13 00:49:40.203198 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:49:40.203222 systemd[1]: Reached target paths.target.
May 13 00:49:40.203934 systemd[1]: Reached target timers.target.
May 13 00:49:40.204982 systemd[1]: Listening on dbus.socket.
May 13 00:49:40.206596 systemd[1]: Starting docker.socket...
May 13 00:49:40.209187 systemd[1]: Listening on sshd.socket.
May 13 00:49:40.210010 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:40.210342 systemd[1]: Listening on docker.socket.
May 13 00:49:40.211133 systemd[1]: Reached target sockets.target.
May 13 00:49:40.211931 systemd[1]: Reached target basic.target.
May 13 00:49:40.212686 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:49:40.212710 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:49:40.213482 systemd[1]: Starting containerd.service...
May 13 00:49:40.214935 systemd[1]: Starting dbus.service...
May 13 00:49:40.216356 systemd[1]: Starting enable-oem-cloudinit.service...
May 13 00:49:40.217998 systemd[1]: Starting extend-filesystems.service...
May 13 00:49:40.218837 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 13 00:49:40.219653 jq[1177]: false
May 13 00:49:40.219674 systemd[1]: Starting motdgen.service...
May 13 00:49:40.221164 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 13 00:49:40.222969 systemd[1]: Starting sshd-keygen.service...
May 13 00:49:40.225731 systemd[1]: Starting systemd-logind.service...
May 13 00:49:40.226470 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:49:40.226523 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:49:40.226850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:49:40.227410 systemd[1]: Starting update-engine.service...
May 13 00:49:40.229019 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 13 00:49:40.231956 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:49:40.234107 jq[1191]: true
May 13 00:49:40.236082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 13 00:49:40.236459 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:49:40.236606 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 13 00:49:40.243472 extend-filesystems[1178]: Found loop1
May 13 00:49:40.243472 extend-filesystems[1178]: Found sr0
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda1
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda2
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda3
May 13 00:49:40.243472 extend-filesystems[1178]: Found usr
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda4
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda6
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda7
May 13 00:49:40.243472 extend-filesystems[1178]: Found vda9
May 13 00:49:40.243472 extend-filesystems[1178]: Checking size of /dev/vda9
May 13 00:49:40.267473 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:49:40.242895 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:49:40.267569 jq[1197]: true
May 13 00:49:40.267641 update_engine[1188]: I0513 00:49:40.258013 1188 main.cc:92] Flatcar Update Engine starting
May 13 00:49:40.253169 dbus-daemon[1176]: [system] SELinux support is enabled
May 13 00:49:40.267938 extend-filesystems[1178]: Resized partition /dev/vda9
May 13 00:49:40.243061 systemd[1]: Finished motdgen.service.
May 13 00:49:40.269124 extend-filesystems[1219]: resize2fs 1.46.5 (30-Dec-2021)
May 13 00:49:40.253309 systemd[1]: Started dbus.service.
May 13 00:49:40.255939 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:49:40.256007 systemd[1]: Reached target system-config.target.
May 13 00:49:40.256534 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:49:40.256548 systemd[1]: Reached target user-config.target.
May 13 00:49:40.274508 update_engine[1188]: I0513 00:49:40.274003 1188 update_check_scheduler.cc:74] Next update check in 11m45s
May 13 00:49:40.274145 systemd[1]: Started update-engine.service.
May 13 00:49:40.276615 systemd[1]: Started locksmithd.service.
May 13 00:49:40.288843 systemd-logind[1185]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 00:49:40.288870 systemd-logind[1185]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 00:49:40.289429 systemd-logind[1185]: New seat seat0.
May 13 00:49:40.291637 systemd[1]: Started systemd-logind.service.
May 13 00:49:40.302881 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:49:40.302960 env[1198]: time="2025-05-13T00:49:40.300576411Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 13 00:49:40.324866 env[1198]: time="2025-05-13T00:49:40.316181352Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:49:40.324929 extend-filesystems[1219]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:49:40.324929 extend-filesystems[1219]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:49:40.324929 extend-filesystems[1219]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:49:40.331050 extend-filesystems[1178]: Resized filesystem in /dev/vda9
May 13 00:49:40.330057 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.325121969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.326567641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.326588140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.326945440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.326978162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.326989894Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.326998540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.327098968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.327422285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:49:40.333594 env[1198]: time="2025-05-13T00:49:40.327574050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:49:40.333771 bash[1223]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:49:40.330258 systemd[1]: Finished extend-filesystems.service.
May 13 00:49:40.333896 env[1198]: time="2025-05-13T00:49:40.327591473Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:49:40.333896 env[1198]: time="2025-05-13T00:49:40.327799313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 13 00:49:40.333896 env[1198]: time="2025-05-13T00:49:40.327815202Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:49:40.334136 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 13 00:49:40.334218 env[1198]: time="2025-05-13T00:49:40.334200624Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:49:40.334241 env[1198]: time="2025-05-13T00:49:40.334226663Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:49:40.334278 env[1198]: time="2025-05-13T00:49:40.334242563Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:49:40.334298 env[1198]: time="2025-05-13T00:49:40.334276787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334298 env[1198]: time="2025-05-13T00:49:40.334289190Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334334 env[1198]: time="2025-05-13T00:49:40.334300842Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334334 env[1198]: time="2025-05-13T00:49:40.334312063Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334334 env[1198]: time="2025-05-13T00:49:40.334325328Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334391 env[1198]: time="2025-05-13T00:49:40.334337010Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334391 env[1198]: time="2025-05-13T00:49:40.334349183Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334391 env[1198]: time="2025-05-13T00:49:40.334359542Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334391 env[1198]: time="2025-05-13T00:49:40.334370263Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:49:40.334463 env[1198]: time="2025-05-13T00:49:40.334440384Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:49:40.334513 env[1198]: time="2025-05-13T00:49:40.334497601Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:49:40.334715 env[1198]: time="2025-05-13T00:49:40.334694701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:49:40.334740 env[1198]: time="2025-05-13T00:49:40.334718005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334740 env[1198]: time="2025-05-13T00:49:40.334730559Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:49:40.334781 env[1198]: time="2025-05-13T00:49:40.334767809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334803 env[1198]: time="2025-05-13T00:49:40.334779400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334803 env[1198]: time="2025-05-13T00:49:40.334790521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334803 env[1198]: time="2025-05-13T00:49:40.334799979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334856 env[1198]: time="2025-05-13T00:49:40.334810749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334856 env[1198]: time="2025-05-13T00:49:40.334820968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334856 env[1198]: time="2025-05-13T00:49:40.334831608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334856 env[1198]: time="2025-05-13T00:49:40.334841136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334856 env[1198]: time="2025-05-13T00:49:40.334851816Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:49:40.334949 env[1198]: time="2025-05-13T00:49:40.334940893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334991 env[1198]: time="2025-05-13T00:49:40.334953707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334991 env[1198]: time="2025-05-13T00:49:40.334976500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:49:40.334991 env[1198]: time="2025-05-13T00:49:40.334986990Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:49:40.335048 env[1198]: time="2025-05-13T00:49:40.334999724Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 13 00:49:40.335048 env[1198]: time="2025-05-13T00:49:40.335009171Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:49:40.335048 env[1198]: time="2025-05-13T00:49:40.335024390Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 13 00:49:40.335105 env[1198]: time="2025-05-13T00:49:40.335056961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:49:40.335277 env[1198]: time="2025-05-13T00:49:40.335220127Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:49:40.335791 env[1198]: time="2025-05-13T00:49:40.335281152Z" level=info msg="Connect containerd service"
May 13 00:49:40.335791 env[1198]: time="2025-05-13T00:49:40.335309976Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:49:40.335844 env[1198]: time="2025-05-13T00:49:40.335801107Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.335951650Z" level=info msg="Start subscribing containerd event"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336022062Z" level=info msg="Start recovering state"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336064211Z" level=info msg="Start event monitor"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336077105Z" level=info msg="Start snapshots syncer"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336083958Z" level=info msg="Start cni network conf syncer for default"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336090240Z" level=info msg="Start streaming server"
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336389722Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336437952Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:49:40.338673 env[1198]: time="2025-05-13T00:49:40.336531528Z" level=info msg="containerd successfully booted in 0.044303s"
May 13 00:49:40.336590 systemd[1]: Started containerd.service.
May 13 00:49:40.342915 locksmithd[1224]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:49:40.850123 systemd-networkd[1025]: eth0: Gained IPv6LL
May 13 00:49:40.851686 systemd[1]: Finished systemd-networkd-wait-online.service.
May 13 00:49:40.853034 systemd[1]: Reached target network-online.target.
May 13 00:49:40.855298 systemd[1]: Starting kubelet.service...
May 13 00:49:41.419626 systemd[1]: Started kubelet.service.
May 13 00:49:41.666038 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:49:41.684239 systemd[1]: Finished sshd-keygen.service.
May 13 00:49:41.686629 systemd[1]: Starting issuegen.service...
May 13 00:49:41.692403 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:49:41.692527 systemd[1]: Finished issuegen.service.
May 13 00:49:41.694556 systemd[1]: Starting systemd-user-sessions.service...
May 13 00:49:41.701822 systemd[1]: Finished systemd-user-sessions.service.
May 13 00:49:41.704025 systemd[1]: Started getty@tty1.service.
May 13 00:49:41.705905 systemd[1]: Started serial-getty@ttyS0.service.
May 13 00:49:41.707180 systemd[1]: Reached target getty.target.
May 13 00:49:41.708177 systemd[1]: Reached target multi-user.target.
May 13 00:49:41.710698 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 13 00:49:41.716784 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 13 00:49:41.716938 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 13 00:49:41.718138 systemd[1]: Startup finished in 666ms (kernel) + 4.674s (initrd) + 5.321s (userspace) = 10.663s.
May 13 00:49:41.809763 kubelet[1240]: E0513 00:49:41.809636 1240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:49:41.811372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:49:41.811504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:49:44.441264 systemd[1]: Created slice system-sshd.slice.
May 13 00:49:44.442087 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:40326.service.
May 13 00:49:44.482125 sshd[1262]: Accepted publickey for core from 10.0.0.1 port 40326 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:49:44.483385 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:44.491450 systemd-logind[1185]: New session 1 of user core.
May 13 00:49:44.492254 systemd[1]: Created slice user-500.slice.
May 13 00:49:44.493218 systemd[1]: Starting user-runtime-dir@500.service...
May 13 00:49:44.500578 systemd[1]: Finished user-runtime-dir@500.service.
May 13 00:49:44.501803 systemd[1]: Starting user@500.service...
May 13 00:49:44.504155 (systemd)[1265]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:44.568320 systemd[1265]: Queued start job for default target default.target.
May 13 00:49:44.568676 systemd[1265]: Reached target paths.target.
May 13 00:49:44.568694 systemd[1265]: Reached target sockets.target.
May 13 00:49:44.568705 systemd[1265]: Reached target timers.target.
May 13 00:49:44.568715 systemd[1265]: Reached target basic.target.
May 13 00:49:44.568745 systemd[1265]: Reached target default.target.
May 13 00:49:44.568765 systemd[1265]: Startup finished in 59ms.
May 13 00:49:44.568810 systemd[1]: Started user@500.service.
May 13 00:49:44.569605 systemd[1]: Started session-1.scope.
May 13 00:49:44.618486 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:40328.service.
May 13 00:49:44.656559 sshd[1274]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:49:44.657879 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:44.661318 systemd-logind[1185]: New session 2 of user core.
May 13 00:49:44.662203 systemd[1]: Started session-2.scope.
May 13 00:49:44.713138 sshd[1274]: pam_unix(sshd:session): session closed for user core
May 13 00:49:44.715683 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:40328.service: Deactivated successfully.
May 13 00:49:44.716226 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:49:44.716725 systemd-logind[1185]: Session 2 logged out. Waiting for processes to exit.
May 13 00:49:44.717656 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:40340.service.
May 13 00:49:44.718373 systemd-logind[1185]: Removed session 2.
May 13 00:49:44.753460 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 40340 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:49:44.754328 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:44.757450 systemd-logind[1185]: New session 3 of user core.
May 13 00:49:44.758141 systemd[1]: Started session-3.scope.
May 13 00:49:44.805860 sshd[1280]: pam_unix(sshd:session): session closed for user core
May 13 00:49:44.808726 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:40340.service: Deactivated successfully.
May 13 00:49:44.809286 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:49:44.809768 systemd-logind[1185]: Session 3 logged out. Waiting for processes to exit.
May 13 00:49:44.810726 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:40348.service.
May 13 00:49:44.811404 systemd-logind[1185]: Removed session 3.
May 13 00:49:44.846083 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 40348 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:49:44.847241 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:44.850408 systemd-logind[1185]: New session 4 of user core.
May 13 00:49:44.851312 systemd[1]: Started session-4.scope.
May 13 00:49:44.901903 sshd[1286]: pam_unix(sshd:session): session closed for user core
May 13 00:49:44.904377 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:40348.service: Deactivated successfully.
May 13 00:49:44.904939 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:49:44.905414 systemd-logind[1185]: Session 4 logged out. Waiting for processes to exit.
May 13 00:49:44.906409 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:40356.service.
May 13 00:49:44.907052 systemd-logind[1185]: Removed session 4.
May 13 00:49:44.941956 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 40356 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:49:44.942968 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:49:44.945901 systemd-logind[1185]: New session 5 of user core.
May 13 00:49:44.946767 systemd[1]: Started session-5.scope.
May 13 00:49:44.999430 sudo[1295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 00:49:44.999600 sudo[1295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 13 00:49:45.009944 systemd[1]: Starting coreos-metadata.service...
May 13 00:49:45.015749 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 00:49:45.015884 systemd[1]: Finished coreos-metadata.service.
May 13 00:49:45.404914 systemd[1]: Stopped kubelet.service.
May 13 00:49:45.406592 systemd[1]: Starting kubelet.service...
May 13 00:49:45.426560 systemd[1]: Reloading.
May 13 00:49:45.499748 /usr/lib/systemd/system-generators/torcx-generator[1355]: time="2025-05-13T00:49:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:49:45.499777 /usr/lib/systemd/system-generators/torcx-generator[1355]: time="2025-05-13T00:49:45Z" level=info msg="torcx already run"
May 13 00:49:45.746719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:49:45.746734 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:49:45.763151 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:49:45.835506 systemd[1]: Started kubelet.service.
May 13 00:49:45.836670 systemd[1]: Stopping kubelet.service...
May 13 00:49:45.837003 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:49:45.837130 systemd[1]: Stopped kubelet.service.
May 13 00:49:45.838221 systemd[1]: Starting kubelet.service...
May 13 00:49:45.910118 systemd[1]: Started kubelet.service.
May 13 00:49:45.943140 kubelet[1401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:49:45.943140 kubelet[1401]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:49:45.943140 kubelet[1401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:49:45.944085 kubelet[1401]: I0513 00:49:45.944040 1401 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:49:46.243001 kubelet[1401]: I0513 00:49:46.242880 1401 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 00:49:46.243001 kubelet[1401]: I0513 00:49:46.242916 1401 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:49:46.243202 kubelet[1401]: I0513 00:49:46.243165 1401 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 00:49:46.259709 kubelet[1401]: I0513 00:49:46.259684 1401 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:49:46.271904 kubelet[1401]: E0513 00:49:46.271851 1401 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 13 00:49:46.271904 kubelet[1401]: I0513 00:49:46.271889 1401 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 13 00:49:46.276228 kubelet[1401]: I0513 00:49:46.276189 1401 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:49:46.277186 kubelet[1401]: I0513 00:49:46.277163 1401 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 00:49:46.277361 kubelet[1401]: I0513 00:49:46.277322 1401 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:49:46.277537 kubelet[1401]: I0513 00:49:46.277357 1401 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 00:49:46.277619 kubelet[1401]: I0513 00:49:46.277538 1401 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:49:46.277619 kubelet[1401]: I0513 00:49:46.277546 1401 container_manager_linux.go:300] "Creating device plugin manager"
May 13 00:49:46.277664 kubelet[1401]: I0513 00:49:46.277653 1401 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:49:46.280357 kubelet[1401]: I0513 00:49:46.280333 1401 kubelet.go:408] "Attempting to sync node with API server"
May 13 00:49:46.280400 kubelet[1401]: I0513 00:49:46.280361 1401 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:49:46.280400 kubelet[1401]: I0513 00:49:46.280398 1401 kubelet.go:314] "Adding apiserver pod source"
May 13 00:49:46.280439 kubelet[1401]: I0513 00:49:46.280411 1401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:49:46.280468 kubelet[1401]: E0513 00:49:46.280434 1401 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:49:46.280490 kubelet[1401]: E0513 00:49:46.280478 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:49:46.295261 kubelet[1401]: I0513 00:49:46.295219 1401 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 00:49:46.296347 kubelet[1401]: W0513 00:49:46.296019 1401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
May 13 00:49:46.296347 kubelet[1401]: W0513 00:49:46.296062 1401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
May 13 00:49:46.296347 kubelet[1401]: E0513 00:49:46.296087 1401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
May 13 00:49:46.296347 kubelet[1401]: E0513 00:49:46.296112 1401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.142\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
May 13 00:49:46.297033 kubelet[1401]: I0513 00:49:46.296988 1401 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:49:46.297513 kubelet[1401]: W0513 00:49:46.297488 1401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 00:49:46.298070 kubelet[1401]: I0513 00:49:46.298046 1401 server.go:1269] "Started kubelet" May 13 00:49:46.299265 kubelet[1401]: I0513 00:49:46.298226 1401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:49:46.299265 kubelet[1401]: I0513 00:49:46.298548 1401 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:49:46.299265 kubelet[1401]: I0513 00:49:46.298595 1401 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:49:46.300566 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 13 00:49:46.300677 kubelet[1401]: I0513 00:49:46.300633 1401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:49:46.300984 kubelet[1401]: I0513 00:49:46.300932 1401 server.go:460] "Adding debug handlers to kubelet server" May 13 00:49:46.302303 kubelet[1401]: I0513 00:49:46.302280 1401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:49:46.304011 kubelet[1401]: I0513 00:49:46.303987 1401 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:49:46.304095 kubelet[1401]: I0513 00:49:46.304061 1401 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:49:46.304095 kubelet[1401]: I0513 00:49:46.304092 1401 reconciler.go:26] "Reconciler: start to sync state" May 13 00:49:46.304827 kubelet[1401]: E0513 00:49:46.304799 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:46.305460 kubelet[1401]: I0513 00:49:46.304903 1401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:49:46.306994 
kubelet[1401]: I0513 00:49:46.306810 1401 factory.go:221] Registration of the containerd container factory successfully May 13 00:49:46.306994 kubelet[1401]: I0513 00:49:46.306824 1401 factory.go:221] Registration of the systemd container factory successfully May 13 00:49:46.310712 kubelet[1401]: E0513 00:49:46.307716 1401 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.142\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 00:49:46.310712 kubelet[1401]: E0513 00:49:46.307982 1401 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.183eefd0fa4e766c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-05-13 00:49:46.298013292 +0000 UTC m=+0.384611396,LastTimestamp:2025-05-13 00:49:46.298013292 +0000 UTC m=+0.384611396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" May 13 00:49:46.310712 kubelet[1401]: W0513 00:49:46.309381 1401 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:49:46.310712 kubelet[1401]: E0513 00:49:46.309402 1401 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 13 00:49:46.310883 kubelet[1401]: E0513 00:49:46.310793 1401 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:49:46.316783 kubelet[1401]: E0513 00:49:46.316660 1401 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.183eefd0fb11433a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-05-13 00:49:46.310779706 +0000 UTC m=+0.397377791,LastTimestamp:2025-05-13 00:49:46.310779706 +0000 UTC m=+0.397377791,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" May 13 00:49:46.319991 kubelet[1401]: I0513 00:49:46.319954 1401 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:49:46.319991 kubelet[1401]: I0513 00:49:46.319983 1401 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:49:46.320057 kubelet[1401]: I0513 00:49:46.320004 1401 state_mem.go:36] "Initialized new in-memory state store" May 13 00:49:46.322728 kubelet[1401]: E0513 00:49:46.322638 1401 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.183eefd0fb8b1598 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.142 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-05-13 00:49:46.318763416 +0000 UTC m=+0.405361501,LastTimestamp:2025-05-13 00:49:46.318763416 +0000 UTC m=+0.405361501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" May 13 00:49:46.326110 kubelet[1401]: E0513 00:49:46.326036 1401 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.183eefd0fb8b2e0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.142 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-05-13 00:49:46.318769678 +0000 UTC m=+0.405367753,LastTimestamp:2025-05-13 00:49:46.318769678 +0000 UTC m=+0.405367753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" May 13 00:49:46.329784 kubelet[1401]: E0513 00:49:46.329690 1401 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.142.183eefd0fb8b472d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.142,UID:10.0.0.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.142 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.142,},FirstTimestamp:2025-05-13 00:49:46.318776109 +0000 UTC m=+0.405374193,LastTimestamp:2025-05-13 00:49:46.318776109 +0000 UTC m=+0.405374193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.142,}" May 13 00:49:46.405488 kubelet[1401]: E0513 00:49:46.405440 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:46.506331 kubelet[1401]: E0513 00:49:46.506206 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:46.512451 kubelet[1401]: E0513 00:49:46.512394 1401 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.142\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" May 13 00:49:46.606685 kubelet[1401]: E0513 00:49:46.606646 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:46.707183 kubelet[1401]: E0513 00:49:46.707136 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:46.766935 kubelet[1401]: I0513 00:49:46.766830 1401 policy_none.go:49] "None policy: Start" May 13 00:49:46.767937 kubelet[1401]: I0513 00:49:46.767908 1401 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:49:46.767937 kubelet[1401]: I0513 00:49:46.767936 1401 state_mem.go:35] "Initializing new in-memory state store" May 13 00:49:46.774905 systemd[1]: Created slice 
kubepods.slice. May 13 00:49:46.778937 systemd[1]: Created slice kubepods-burstable.slice. May 13 00:49:46.781767 systemd[1]: Created slice kubepods-besteffort.slice. May 13 00:49:46.787718 kubelet[1401]: I0513 00:49:46.787679 1401 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:49:46.787832 kubelet[1401]: I0513 00:49:46.787821 1401 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:49:46.787873 kubelet[1401]: I0513 00:49:46.787832 1401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:49:46.788396 kubelet[1401]: I0513 00:49:46.788119 1401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:49:46.791018 kubelet[1401]: E0513 00:49:46.790995 1401 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.142\" not found" May 13 00:49:46.810180 kubelet[1401]: I0513 00:49:46.810145 1401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:49:46.811023 kubelet[1401]: I0513 00:49:46.810991 1401 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:49:46.811080 kubelet[1401]: I0513 00:49:46.811034 1401 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:49:46.811080 kubelet[1401]: I0513 00:49:46.811054 1401 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:49:46.811124 kubelet[1401]: E0513 00:49:46.811103 1401 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 00:49:46.889463 kubelet[1401]: I0513 00:49:46.889412 1401 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.142" May 13 00:49:46.908741 kubelet[1401]: I0513 00:49:46.908710 1401 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.142" May 13 00:49:46.908741 kubelet[1401]: E0513 00:49:46.908736 1401 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.142\": node \"10.0.0.142\" not found" May 13 00:49:46.916539 kubelet[1401]: E0513 00:49:46.916482 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:47.017573 kubelet[1401]: E0513 00:49:47.017451 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:47.118020 kubelet[1401]: E0513 00:49:47.117981 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:47.218435 kubelet[1401]: E0513 00:49:47.218394 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:47.244563 kubelet[1401]: I0513 00:49:47.244534 1401 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 00:49:47.244683 kubelet[1401]: W0513 00:49:47.244659 1401 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:49:47.280958 kubelet[1401]: E0513 00:49:47.280857 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:47.319455 kubelet[1401]: E0513 00:49:47.319428 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:47.420369 kubelet[1401]: E0513 00:49:47.420344 1401 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.142\" not found" May 13 00:49:47.483587 sudo[1295]: pam_unix(sudo:session): session closed for user root May 13 00:49:47.484992 sshd[1292]: pam_unix(sshd:session): session closed for user core May 13 00:49:47.487265 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:40356.service: Deactivated successfully. May 13 00:49:47.487877 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:49:47.488353 systemd-logind[1185]: Session 5 logged out. Waiting for processes to exit. May 13 00:49:47.488944 systemd-logind[1185]: Removed session 5. May 13 00:49:47.521286 kubelet[1401]: I0513 00:49:47.521267 1401 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 00:49:47.521578 env[1198]: time="2025-05-13T00:49:47.521534911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 13 00:49:47.521770 kubelet[1401]: I0513 00:49:47.521749 1401 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 00:49:48.281304 kubelet[1401]: E0513 00:49:48.281271 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:48.281304 kubelet[1401]: I0513 00:49:48.281284 1401 apiserver.go:52] "Watching apiserver" May 13 00:49:48.289745 systemd[1]: Created slice kubepods-besteffort-podb8e46ce5_bbb9_40e8_8566_cda0d7f00c6d.slice. May 13 00:49:48.296601 systemd[1]: Created slice kubepods-burstable-pod09fa62f7_1b5d_4b97_9f90_f9c08f150e9e.slice. May 13 00:49:48.304591 kubelet[1401]: I0513 00:49:48.304561 1401 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:49:48.312547 kubelet[1401]: I0513 00:49:48.312524 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-net\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312666 kubelet[1401]: I0513 00:49:48.312551 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d-kube-proxy\") pod \"kube-proxy-pg9rl\" (UID: \"b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d\") " pod="kube-system/kube-proxy-pg9rl" May 13 00:49:48.312666 kubelet[1401]: I0513 00:49:48.312570 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5swm\" (UniqueName: \"kubernetes.io/projected/b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d-kube-api-access-g5swm\") pod \"kube-proxy-pg9rl\" (UID: \"b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d\") " pod="kube-system/kube-proxy-pg9rl" May 13 
00:49:48.312666 kubelet[1401]: I0513 00:49:48.312583 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cni-path\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312666 kubelet[1401]: I0513 00:49:48.312596 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-lib-modules\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312666 kubelet[1401]: I0513 00:49:48.312609 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-xtables-lock\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312666 kubelet[1401]: I0513 00:49:48.312660 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-clustermesh-secrets\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312793 kubelet[1401]: I0513 00:49:48.312676 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-run\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312793 kubelet[1401]: I0513 00:49:48.312698 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-bpf-maps\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312793 kubelet[1401]: I0513 00:49:48.312722 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-kernel\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312793 kubelet[1401]: I0513 00:49:48.312738 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hubble-tls\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312793 kubelet[1401]: I0513 00:49:48.312753 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvb8\" (UniqueName: \"kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-kube-api-access-ppvb8\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312793 kubelet[1401]: I0513 00:49:48.312768 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d-xtables-lock\") pod \"kube-proxy-pg9rl\" (UID: \"b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d\") " pod="kube-system/kube-proxy-pg9rl" May 13 00:49:48.312913 kubelet[1401]: I0513 00:49:48.312791 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hostproc\") pod 
\"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312913 kubelet[1401]: I0513 00:49:48.312806 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-cgroup\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312913 kubelet[1401]: I0513 00:49:48.312820 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-etc-cni-netd\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312913 kubelet[1401]: I0513 00:49:48.312832 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-config-path\") pod \"cilium-jczmd\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") " pod="kube-system/cilium-jczmd" May 13 00:49:48.312913 kubelet[1401]: I0513 00:49:48.312846 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d-lib-modules\") pod \"kube-proxy-pg9rl\" (UID: \"b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d\") " pod="kube-system/kube-proxy-pg9rl" May 13 00:49:48.413504 kubelet[1401]: I0513 00:49:48.413451 1401 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 13 00:49:48.596861 kubelet[1401]: E0513 00:49:48.596790 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:49:48.598027 env[1198]: time="2025-05-13T00:49:48.597723239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg9rl,Uid:b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d,Namespace:kube-system,Attempt:0,}" May 13 00:49:48.605803 kubelet[1401]: E0513 00:49:48.605778 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:49:48.606245 env[1198]: time="2025-05-13T00:49:48.606210070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jczmd,Uid:09fa62f7-1b5d-4b97-9f90-f9c08f150e9e,Namespace:kube-system,Attempt:0,}" May 13 00:49:49.282269 kubelet[1401]: E0513 00:49:49.282229 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:49.967048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988507412.mount: Deactivated successfully. 
May 13 00:49:49.974438 env[1198]: time="2025-05-13T00:49:49.974402719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.977381 env[1198]: time="2025-05-13T00:49:49.977345428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.978395 env[1198]: time="2025-05-13T00:49:49.978358880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.981504 env[1198]: time="2025-05-13T00:49:49.981471701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.985158 env[1198]: time="2025-05-13T00:49:49.985131305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.986439 env[1198]: time="2025-05-13T00:49:49.986418471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.988016 env[1198]: time="2025-05-13T00:49:49.987991534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:49.990539 env[1198]: time="2025-05-13T00:49:49.990508217Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:50.006560 env[1198]: time="2025-05-13T00:49:50.006487406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:49:50.006560 env[1198]: time="2025-05-13T00:49:50.006522952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:49:50.006560 env[1198]: time="2025-05-13T00:49:50.006534657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:49:50.006790 env[1198]: time="2025-05-13T00:49:50.006678182Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96b9b9e5164f22beeb14f7dbcc839b8dd6a4403e0f00cd425022d8af15103e3c pid=1456 runtime=io.containerd.runc.v2 May 13 00:49:50.019820 systemd[1]: Started cri-containerd-96b9b9e5164f22beeb14f7dbcc839b8dd6a4403e0f00cd425022d8af15103e3c.scope. May 13 00:49:50.025503 env[1198]: time="2025-05-13T00:49:50.025440250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:49:50.025503 env[1198]: time="2025-05-13T00:49:50.025516028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:49:50.025712 env[1198]: time="2025-05-13T00:49:50.025536394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:49:50.025896 env[1198]: time="2025-05-13T00:49:50.025856589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf pid=1485 runtime=io.containerd.runc.v2 May 13 00:49:50.037615 systemd[1]: Started cri-containerd-3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf.scope. May 13 00:49:50.040561 env[1198]: time="2025-05-13T00:49:50.040522875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pg9rl,Uid:b8e46ce5-bbb9-40e8-8566-cda0d7f00c6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"96b9b9e5164f22beeb14f7dbcc839b8dd6a4403e0f00cd425022d8af15103e3c\"" May 13 00:49:50.041366 kubelet[1401]: E0513 00:49:50.041339 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:49:50.042595 env[1198]: time="2025-05-13T00:49:50.042558727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 00:49:50.058740 env[1198]: time="2025-05-13T00:49:50.058684487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jczmd,Uid:09fa62f7-1b5d-4b97-9f90-f9c08f150e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\"" May 13 00:49:50.059302 kubelet[1401]: E0513 00:49:50.059271 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:49:50.282658 kubelet[1401]: E0513 00:49:50.282550 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:51.044428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794600718.mount: 
Deactivated successfully. May 13 00:49:51.283129 kubelet[1401]: E0513 00:49:51.283091 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:51.790557 env[1198]: time="2025-05-13T00:49:51.790506586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:51.792486 env[1198]: time="2025-05-13T00:49:51.792446580Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:51.793874 env[1198]: time="2025-05-13T00:49:51.793840998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:51.795183 env[1198]: time="2025-05-13T00:49:51.795115313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:49:51.795515 env[1198]: time="2025-05-13T00:49:51.795483458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 00:49:51.796710 env[1198]: time="2025-05-13T00:49:51.796685091Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:49:51.797304 env[1198]: time="2025-05-13T00:49:51.797279512Z" level=info msg="CreateContainer within sandbox \"96b9b9e5164f22beeb14f7dbcc839b8dd6a4403e0f00cd425022d8af15103e3c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:49:51.810812 
env[1198]: time="2025-05-13T00:49:51.810746343Z" level=info msg="CreateContainer within sandbox \"96b9b9e5164f22beeb14f7dbcc839b8dd6a4403e0f00cd425022d8af15103e3c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e216554b262fc02ecc283eb291cf7674fe230804216cb58cbf6e4171c1af1368\"" May 13 00:49:51.811352 env[1198]: time="2025-05-13T00:49:51.811330603Z" level=info msg="StartContainer for \"e216554b262fc02ecc283eb291cf7674fe230804216cb58cbf6e4171c1af1368\"" May 13 00:49:51.825020 systemd[1]: Started cri-containerd-e216554b262fc02ecc283eb291cf7674fe230804216cb58cbf6e4171c1af1368.scope. May 13 00:49:51.849053 env[1198]: time="2025-05-13T00:49:51.849015446Z" level=info msg="StartContainer for \"e216554b262fc02ecc283eb291cf7674fe230804216cb58cbf6e4171c1af1368\" returns successfully" May 13 00:49:52.283526 kubelet[1401]: E0513 00:49:52.283429 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:52.822394 kubelet[1401]: E0513 00:49:52.822364 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:49:52.829342 kubelet[1401]: I0513 00:49:52.829278 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pg9rl" podStartSLOduration=5.075057853 podStartE2EDuration="6.8292626s" podCreationTimestamp="2025-05-13 00:49:46 +0000 UTC" firstStartedPulling="2025-05-13 00:49:50.042072193 +0000 UTC m=+4.128670277" lastFinishedPulling="2025-05-13 00:49:51.796276929 +0000 UTC m=+5.882875024" observedRunningTime="2025-05-13 00:49:52.829012934 +0000 UTC m=+6.915611029" watchObservedRunningTime="2025-05-13 00:49:52.8292626 +0000 UTC m=+6.915860684" May 13 00:49:53.284336 kubelet[1401]: E0513 00:49:53.284234 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 13 00:49:53.828079 kubelet[1401]: E0513 00:49:53.828040 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:49:54.285412 kubelet[1401]: E0513 00:49:54.285261 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:55.286171 kubelet[1401]: E0513 00:49:55.286111 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:56.287212 kubelet[1401]: E0513 00:49:56.287136 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:57.128875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986524804.mount: Deactivated successfully. May 13 00:49:57.288032 kubelet[1401]: E0513 00:49:57.287983 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:58.288901 kubelet[1401]: E0513 00:49:58.288836 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:49:59.289491 kubelet[1401]: E0513 00:49:59.289411 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:00.289831 kubelet[1401]: E0513 00:50:00.289756 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:00.828671 env[1198]: time="2025-05-13T00:50:00.828609891Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:00.830982 env[1198]: 
time="2025-05-13T00:50:00.830942031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:00.833005 env[1198]: time="2025-05-13T00:50:00.832935052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:00.833725 env[1198]: time="2025-05-13T00:50:00.833657716Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:50:00.835870 env[1198]: time="2025-05-13T00:50:00.835838280Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:50:00.849683 env[1198]: time="2025-05-13T00:50:00.849625092Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\"" May 13 00:50:00.850360 env[1198]: time="2025-05-13T00:50:00.850317471Z" level=info msg="StartContainer for \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\"" May 13 00:50:00.867657 systemd[1]: Started cri-containerd-6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9.scope. 
May 13 00:50:00.889793 env[1198]: time="2025-05-13T00:50:00.889725134Z" level=info msg="StartContainer for \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\" returns successfully" May 13 00:50:00.899704 systemd[1]: cri-containerd-6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9.scope: Deactivated successfully. May 13 00:50:01.291016 kubelet[1401]: E0513 00:50:01.290876 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:01.577872 env[1198]: time="2025-05-13T00:50:01.577822759Z" level=info msg="shim disconnected" id=6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9 May 13 00:50:01.578051 env[1198]: time="2025-05-13T00:50:01.577876670Z" level=warning msg="cleaning up after shim disconnected" id=6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9 namespace=k8s.io May 13 00:50:01.578051 env[1198]: time="2025-05-13T00:50:01.577892156Z" level=info msg="cleaning up dead shim" May 13 00:50:01.584607 env[1198]: time="2025-05-13T00:50:01.584565615Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1761 runtime=io.containerd.runc.v2\n" May 13 00:50:01.839598 kubelet[1401]: E0513 00:50:01.839184 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:01.841060 env[1198]: time="2025-05-13T00:50:01.841003081Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:50:01.844275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9-rootfs.mount: Deactivated successfully. 
May 13 00:50:01.861664 env[1198]: time="2025-05-13T00:50:01.861606230Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\"" May 13 00:50:01.862114 env[1198]: time="2025-05-13T00:50:01.862083423Z" level=info msg="StartContainer for \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\"" May 13 00:50:01.876487 systemd[1]: Started cri-containerd-7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff.scope. May 13 00:50:01.902143 env[1198]: time="2025-05-13T00:50:01.902098701Z" level=info msg="StartContainer for \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\" returns successfully" May 13 00:50:01.906645 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:50:01.906833 systemd[1]: Stopped systemd-sysctl.service. May 13 00:50:01.907043 systemd[1]: Stopping systemd-sysctl.service... May 13 00:50:01.908316 systemd[1]: Starting systemd-sysctl.service... May 13 00:50:01.911143 systemd[1]: cri-containerd-7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff.scope: Deactivated successfully. May 13 00:50:01.914927 systemd[1]: Finished systemd-sysctl.service. 
May 13 00:50:01.931990 env[1198]: time="2025-05-13T00:50:01.931933902Z" level=info msg="shim disconnected" id=7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff May 13 00:50:01.931990 env[1198]: time="2025-05-13T00:50:01.931990484Z" level=warning msg="cleaning up after shim disconnected" id=7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff namespace=k8s.io May 13 00:50:01.932185 env[1198]: time="2025-05-13T00:50:01.931999216Z" level=info msg="cleaning up dead shim" May 13 00:50:01.938432 env[1198]: time="2025-05-13T00:50:01.938383304Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1823 runtime=io.containerd.runc.v2\n" May 13 00:50:02.291289 kubelet[1401]: E0513 00:50:02.291161 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:02.841873 kubelet[1401]: E0513 00:50:02.841841 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:02.844103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff-rootfs.mount: Deactivated successfully. 
May 13 00:50:02.846370 env[1198]: time="2025-05-13T00:50:02.846333529Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:50:02.864437 env[1198]: time="2025-05-13T00:50:02.864389768Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\"" May 13 00:50:02.864940 env[1198]: time="2025-05-13T00:50:02.864898279Z" level=info msg="StartContainer for \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\"" May 13 00:50:02.879495 systemd[1]: Started cri-containerd-b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df.scope. May 13 00:50:02.905024 env[1198]: time="2025-05-13T00:50:02.904536992Z" level=info msg="StartContainer for \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\" returns successfully" May 13 00:50:02.904607 systemd[1]: cri-containerd-b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df.scope: Deactivated successfully. 
May 13 00:50:02.926059 env[1198]: time="2025-05-13T00:50:02.926006871Z" level=info msg="shim disconnected" id=b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df May 13 00:50:02.926059 env[1198]: time="2025-05-13T00:50:02.926053168Z" level=warning msg="cleaning up after shim disconnected" id=b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df namespace=k8s.io May 13 00:50:02.926059 env[1198]: time="2025-05-13T00:50:02.926062543Z" level=info msg="cleaning up dead shim" May 13 00:50:02.932090 env[1198]: time="2025-05-13T00:50:02.932050075Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1880 runtime=io.containerd.runc.v2\n" May 13 00:50:03.291768 kubelet[1401]: E0513 00:50:03.291639 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:03.844118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df-rootfs.mount: Deactivated successfully. 
May 13 00:50:03.844743 kubelet[1401]: E0513 00:50:03.844721 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:03.846346 env[1198]: time="2025-05-13T00:50:03.846309348Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:50:03.860822 env[1198]: time="2025-05-13T00:50:03.860767961Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\"" May 13 00:50:03.861367 env[1198]: time="2025-05-13T00:50:03.861341683Z" level=info msg="StartContainer for \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\"" May 13 00:50:03.875403 systemd[1]: Started cri-containerd-cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d.scope. May 13 00:50:03.894201 systemd[1]: cri-containerd-cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d.scope: Deactivated successfully. 
May 13 00:50:03.895811 env[1198]: time="2025-05-13T00:50:03.895767719Z" level=info msg="StartContainer for \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\" returns successfully" May 13 00:50:03.915358 env[1198]: time="2025-05-13T00:50:03.915313672Z" level=info msg="shim disconnected" id=cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d May 13 00:50:03.915358 env[1198]: time="2025-05-13T00:50:03.915356693Z" level=warning msg="cleaning up after shim disconnected" id=cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d namespace=k8s.io May 13 00:50:03.915544 env[1198]: time="2025-05-13T00:50:03.915364609Z" level=info msg="cleaning up dead shim" May 13 00:50:03.921112 env[1198]: time="2025-05-13T00:50:03.921082891Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1935 runtime=io.containerd.runc.v2\n" May 13 00:50:04.292636 kubelet[1401]: E0513 00:50:04.292534 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:04.844300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d-rootfs.mount: Deactivated successfully. 
May 13 00:50:04.848565 kubelet[1401]: E0513 00:50:04.848544 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:04.850047 env[1198]: time="2025-05-13T00:50:04.849985653Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:50:04.867542 env[1198]: time="2025-05-13T00:50:04.867490246Z" level=info msg="CreateContainer within sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\"" May 13 00:50:04.868024 env[1198]: time="2025-05-13T00:50:04.867991355Z" level=info msg="StartContainer for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\"" May 13 00:50:04.883852 systemd[1]: Started cri-containerd-6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1.scope. 
May 13 00:50:04.905473 env[1198]: time="2025-05-13T00:50:04.905437630Z" level=info msg="StartContainer for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" returns successfully" May 13 00:50:04.998860 kubelet[1401]: I0513 00:50:04.998813 1401 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 00:50:05.196987 kernel: Initializing XFRM netlink socket May 13 00:50:05.293644 kubelet[1401]: E0513 00:50:05.293596 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:05.853248 kubelet[1401]: E0513 00:50:05.853215 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:06.281025 kubelet[1401]: E0513 00:50:06.280859 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:06.294386 kubelet[1401]: E0513 00:50:06.294312 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:06.811600 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:50:06.811708 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:50:06.812298 systemd-networkd[1025]: cilium_host: Link UP May 13 00:50:06.812437 systemd-networkd[1025]: cilium_net: Link UP May 13 00:50:06.812564 systemd-networkd[1025]: cilium_net: Gained carrier May 13 00:50:06.812674 systemd-networkd[1025]: cilium_host: Gained carrier May 13 00:50:06.855148 kubelet[1401]: E0513 00:50:06.854829 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:06.893945 systemd-networkd[1025]: cilium_vxlan: Link UP May 13 00:50:06.893954 
systemd-networkd[1025]: cilium_vxlan: Gained carrier May 13 00:50:07.010122 systemd-networkd[1025]: cilium_host: Gained IPv6LL May 13 00:50:07.095998 kernel: NET: Registered PF_ALG protocol family May 13 00:50:07.295072 kubelet[1401]: E0513 00:50:07.295005 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:07.474112 systemd-networkd[1025]: cilium_net: Gained IPv6LL May 13 00:50:07.599721 systemd-networkd[1025]: lxc_health: Link UP May 13 00:50:07.612649 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:50:07.611203 systemd-networkd[1025]: lxc_health: Gained carrier May 13 00:50:07.856931 kubelet[1401]: E0513 00:50:07.856705 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:08.295541 kubelet[1401]: E0513 00:50:08.295365 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:08.625771 kubelet[1401]: I0513 00:50:08.625709 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jczmd" podStartSLOduration=11.850877222 podStartE2EDuration="22.625691565s" podCreationTimestamp="2025-05-13 00:49:46 +0000 UTC" firstStartedPulling="2025-05-13 00:49:50.059814229 +0000 UTC m=+4.146412314" lastFinishedPulling="2025-05-13 00:50:00.834628572 +0000 UTC m=+14.921226657" observedRunningTime="2025-05-13 00:50:05.871165194 +0000 UTC m=+19.957763309" watchObservedRunningTime="2025-05-13 00:50:08.625691565 +0000 UTC m=+22.712289650" May 13 00:50:08.627118 systemd-networkd[1025]: cilium_vxlan: Gained IPv6LL May 13 00:50:08.690185 systemd-networkd[1025]: lxc_health: Gained IPv6LL May 13 00:50:08.815744 systemd[1]: Created slice kubepods-besteffort-pod3f48ee75_dbcf_4793_95a6_e430f839e0d8.slice. 
May 13 00:50:08.835815 kubelet[1401]: I0513 00:50:08.835742 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkkfk\" (UniqueName: \"kubernetes.io/projected/3f48ee75-dbcf-4793-95a6-e430f839e0d8-kube-api-access-kkkfk\") pod \"nginx-deployment-8587fbcb89-sgm8b\" (UID: \"3f48ee75-dbcf-4793-95a6-e430f839e0d8\") " pod="default/nginx-deployment-8587fbcb89-sgm8b" May 13 00:50:08.858510 kubelet[1401]: E0513 00:50:08.858471 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:09.119192 env[1198]: time="2025-05-13T00:50:09.119119508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sgm8b,Uid:3f48ee75-dbcf-4793-95a6-e430f839e0d8,Namespace:default,Attempt:0,}" May 13 00:50:09.155478 systemd-networkd[1025]: lxcd3caddc50719: Link UP May 13 00:50:09.162996 kernel: eth0: renamed from tmpd4cbb May 13 00:50:09.169862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 00:50:09.169907 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd3caddc50719: link becomes ready May 13 00:50:09.170587 systemd-networkd[1025]: lxcd3caddc50719: Gained carrier May 13 00:50:09.296238 kubelet[1401]: E0513 00:50:09.296169 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:10.296646 kubelet[1401]: E0513 00:50:10.296594 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:10.738371 systemd-networkd[1025]: lxcd3caddc50719: Gained IPv6LL May 13 00:50:10.929186 env[1198]: time="2025-05-13T00:50:10.929104521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:10.929186 env[1198]: time="2025-05-13T00:50:10.929150056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:10.929186 env[1198]: time="2025-05-13T00:50:10.929160151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:10.929576 env[1198]: time="2025-05-13T00:50:10.929305688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4cbbc9844d167f6bd3f9c61f7bd521a08d130a44d37948a24e755ae8cc1a43b pid=2494 runtime=io.containerd.runc.v2 May 13 00:50:10.941706 systemd[1]: Started cri-containerd-d4cbbc9844d167f6bd3f9c61f7bd521a08d130a44d37948a24e755ae8cc1a43b.scope. May 13 00:50:10.951141 systemd-resolved[1139]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:50:10.970880 env[1198]: time="2025-05-13T00:50:10.970826708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sgm8b,Uid:3f48ee75-dbcf-4793-95a6-e430f839e0d8,Namespace:default,Attempt:0,} returns sandbox id \"d4cbbc9844d167f6bd3f9c61f7bd521a08d130a44d37948a24e755ae8cc1a43b\"" May 13 00:50:10.972364 env[1198]: time="2025-05-13T00:50:10.972333230Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:50:11.297576 kubelet[1401]: E0513 00:50:11.297515 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:12.126378 kubelet[1401]: I0513 00:50:12.126335 1401 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:50:12.126860 kubelet[1401]: E0513 00:50:12.126841 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:12.298463 kubelet[1401]: E0513 00:50:12.298404 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:12.865456 kubelet[1401]: E0513 00:50:12.865407 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:13.299146 kubelet[1401]: E0513 00:50:13.298989 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:13.817359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168201821.mount: Deactivated successfully. May 13 00:50:14.299889 kubelet[1401]: E0513 00:50:14.299786 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:15.300842 kubelet[1401]: E0513 00:50:15.300797 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:15.690720 env[1198]: time="2025-05-13T00:50:15.690626555Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:15.693349 env[1198]: time="2025-05-13T00:50:15.693318088Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:15.695220 env[1198]: time="2025-05-13T00:50:15.695190765Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:15.696695 env[1198]: time="2025-05-13T00:50:15.696657750Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:15.697193 env[1198]: time="2025-05-13T00:50:15.697169603Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 00:50:15.699233 env[1198]: time="2025-05-13T00:50:15.699206963Z" level=info msg="CreateContainer within sandbox \"d4cbbc9844d167f6bd3f9c61f7bd521a08d130a44d37948a24e755ae8cc1a43b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 00:50:15.710778 env[1198]: time="2025-05-13T00:50:15.710732442Z" level=info msg="CreateContainer within sandbox \"d4cbbc9844d167f6bd3f9c61f7bd521a08d130a44d37948a24e755ae8cc1a43b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2babc69c7340248ec66c8c4940f71f44898f22fb124ef4604cd6beafebcedf91\"" May 13 00:50:15.711129 env[1198]: time="2025-05-13T00:50:15.711108677Z" level=info msg="StartContainer for \"2babc69c7340248ec66c8c4940f71f44898f22fb124ef4604cd6beafebcedf91\"" May 13 00:50:15.724688 systemd[1]: Started cri-containerd-2babc69c7340248ec66c8c4940f71f44898f22fb124ef4604cd6beafebcedf91.scope. 
May 13 00:50:15.744033 env[1198]: time="2025-05-13T00:50:15.743999035Z" level=info msg="StartContainer for \"2babc69c7340248ec66c8c4940f71f44898f22fb124ef4604cd6beafebcedf91\" returns successfully" May 13 00:50:15.877722 kubelet[1401]: I0513 00:50:15.877653 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-sgm8b" podStartSLOduration=3.151433103 podStartE2EDuration="7.877636361s" podCreationTimestamp="2025-05-13 00:50:08 +0000 UTC" firstStartedPulling="2025-05-13 00:50:10.971930195 +0000 UTC m=+25.058528270" lastFinishedPulling="2025-05-13 00:50:15.698133443 +0000 UTC m=+29.784731528" observedRunningTime="2025-05-13 00:50:15.877434328 +0000 UTC m=+29.964032413" watchObservedRunningTime="2025-05-13 00:50:15.877636361 +0000 UTC m=+29.964234446" May 13 00:50:16.301993 kubelet[1401]: E0513 00:50:16.301916 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:17.302944 kubelet[1401]: E0513 00:50:17.302865 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:18.303509 kubelet[1401]: E0513 00:50:18.303473 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:19.304199 kubelet[1401]: E0513 00:50:19.304066 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:20.304476 kubelet[1401]: E0513 00:50:20.304387 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:21.028388 systemd[1]: Created slice kubepods-besteffort-pode697fb0e_b3a0_454f_9cae_0824991d5223.slice. 
May 13 00:50:21.102276 kubelet[1401]: I0513 00:50:21.102199 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9lv\" (UniqueName: \"kubernetes.io/projected/e697fb0e-b3a0-454f-9cae-0824991d5223-kube-api-access-gr9lv\") pod \"nfs-server-provisioner-0\" (UID: \"e697fb0e-b3a0-454f-9cae-0824991d5223\") " pod="default/nfs-server-provisioner-0"
May 13 00:50:21.102276 kubelet[1401]: I0513 00:50:21.102259 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e697fb0e-b3a0-454f-9cae-0824991d5223-data\") pod \"nfs-server-provisioner-0\" (UID: \"e697fb0e-b3a0-454f-9cae-0824991d5223\") " pod="default/nfs-server-provisioner-0"
May 13 00:50:21.305176 kubelet[1401]: E0513 00:50:21.305109 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:21.331177 env[1198]: time="2025-05-13T00:50:21.331112694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e697fb0e-b3a0-454f-9cae-0824991d5223,Namespace:default,Attempt:0,}"
May 13 00:50:21.359977 systemd-networkd[1025]: lxc6c2785d7afac: Link UP
May 13 00:50:21.369082 kernel: eth0: renamed from tmp6818d
May 13 00:50:21.374439 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:50:21.374776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6c2785d7afac: link becomes ready
May 13 00:50:21.374551 systemd-networkd[1025]: lxc6c2785d7afac: Gained carrier
May 13 00:50:21.545031 env[1198]: time="2025-05-13T00:50:21.544952803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:50:21.545031 env[1198]: time="2025-05-13T00:50:21.544999061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:50:21.545031 env[1198]: time="2025-05-13T00:50:21.545009020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:50:21.545248 env[1198]: time="2025-05-13T00:50:21.545195222Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6818dbc2be41e780a659da9b1dd23d4bdba13064edd26419d8fdb6e5a116defc pid=2622 runtime=io.containerd.runc.v2
May 13 00:50:21.557427 systemd[1]: Started cri-containerd-6818dbc2be41e780a659da9b1dd23d4bdba13064edd26419d8fdb6e5a116defc.scope.
May 13 00:50:21.567039 systemd-resolved[1139]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:50:21.586483 env[1198]: time="2025-05-13T00:50:21.586432458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e697fb0e-b3a0-454f-9cae-0824991d5223,Namespace:default,Attempt:0,} returns sandbox id \"6818dbc2be41e780a659da9b1dd23d4bdba13064edd26419d8fdb6e5a116defc\""
May 13 00:50:21.588079 env[1198]: time="2025-05-13T00:50:21.588035083Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
May 13 00:50:22.305280 kubelet[1401]: E0513 00:50:22.305237 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:22.706108 systemd-networkd[1025]: lxc6c2785d7afac: Gained IPv6LL
May 13 00:50:23.305437 kubelet[1401]: E0513 00:50:23.305386 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:23.913565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439874493.mount: Deactivated successfully.
May 13 00:50:24.305668 kubelet[1401]: E0513 00:50:24.305628 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:25.059103 update_engine[1188]: I0513 00:50:25.059026 1188 update_attempter.cc:509] Updating boot flags...
May 13 00:50:25.306510 kubelet[1401]: E0513 00:50:25.306448 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:26.280849 kubelet[1401]: E0513 00:50:26.280788 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:26.307527 kubelet[1401]: E0513 00:50:26.307459 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:26.327151 env[1198]: time="2025-05-13T00:50:26.327078342Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:26.329525 env[1198]: time="2025-05-13T00:50:26.329485028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:26.331638 env[1198]: time="2025-05-13T00:50:26.331613239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:26.333726 env[1198]: time="2025-05-13T00:50:26.333672620Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:26.334651 env[1198]: time="2025-05-13T00:50:26.334597666Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
May 13 00:50:26.336840 env[1198]: time="2025-05-13T00:50:26.336795038Z" level=info msg="CreateContainer within sandbox \"6818dbc2be41e780a659da9b1dd23d4bdba13064edd26419d8fdb6e5a116defc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
May 13 00:50:26.352615 env[1198]: time="2025-05-13T00:50:26.352556964Z" level=info msg="CreateContainer within sandbox \"6818dbc2be41e780a659da9b1dd23d4bdba13064edd26419d8fdb6e5a116defc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7d24c81d1d9de27120540ad84e236d09c3930d2daa07ad2130ce6af0006d482c\""
May 13 00:50:26.353153 env[1198]: time="2025-05-13T00:50:26.353104198Z" level=info msg="StartContainer for \"7d24c81d1d9de27120540ad84e236d09c3930d2daa07ad2130ce6af0006d482c\""
May 13 00:50:26.370142 systemd[1]: run-containerd-runc-k8s.io-7d24c81d1d9de27120540ad84e236d09c3930d2daa07ad2130ce6af0006d482c-runc.B49liQ.mount: Deactivated successfully.
May 13 00:50:26.372896 systemd[1]: Started cri-containerd-7d24c81d1d9de27120540ad84e236d09c3930d2daa07ad2130ce6af0006d482c.scope.
May 13 00:50:26.426641 env[1198]: time="2025-05-13T00:50:26.426570283Z" level=info msg="StartContainer for \"7d24c81d1d9de27120540ad84e236d09c3930d2daa07ad2130ce6af0006d482c\" returns successfully"
May 13 00:50:26.904512 kubelet[1401]: I0513 00:50:26.904429 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.156435925 podStartE2EDuration="5.904409096s" podCreationTimestamp="2025-05-13 00:50:21 +0000 UTC" firstStartedPulling="2025-05-13 00:50:21.587571264 +0000 UTC m=+35.674169339" lastFinishedPulling="2025-05-13 00:50:26.335544425 +0000 UTC m=+40.422142510" observedRunningTime="2025-05-13 00:50:26.904091095 +0000 UTC m=+40.990689191" watchObservedRunningTime="2025-05-13 00:50:26.904409096 +0000 UTC m=+40.991007181"
May 13 00:50:27.308100 kubelet[1401]: E0513 00:50:27.308018 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:28.309184 kubelet[1401]: E0513 00:50:28.309134 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:29.309414 kubelet[1401]: E0513 00:50:29.309355 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:30.309695 kubelet[1401]: E0513 00:50:30.309643 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:31.310703 kubelet[1401]: E0513 00:50:31.310675 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:32.310988 kubelet[1401]: E0513 00:50:32.310929 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:33.311529 kubelet[1401]: E0513 00:50:33.311477 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:34.311904 kubelet[1401]: E0513 00:50:34.311847 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:35.312922 kubelet[1401]: E0513 00:50:35.312853 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:36.235047 systemd[1]: Created slice kubepods-besteffort-pod8bf2f3a0_e336_4470_8f37_b3bc5a994ddb.slice.
May 13 00:50:36.285386 kubelet[1401]: I0513 00:50:36.285337 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c2c9091-0a3d-4162-9bc7-20a1c1a2e4e1\" (UniqueName: \"kubernetes.io/nfs/8bf2f3a0-e336-4470-8f37-b3bc5a994ddb-pvc-3c2c9091-0a3d-4162-9bc7-20a1c1a2e4e1\") pod \"test-pod-1\" (UID: \"8bf2f3a0-e336-4470-8f37-b3bc5a994ddb\") " pod="default/test-pod-1"
May 13 00:50:36.285386 kubelet[1401]: I0513 00:50:36.285388 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r955t\" (UniqueName: \"kubernetes.io/projected/8bf2f3a0-e336-4470-8f37-b3bc5a994ddb-kube-api-access-r955t\") pod \"test-pod-1\" (UID: \"8bf2f3a0-e336-4470-8f37-b3bc5a994ddb\") " pod="default/test-pod-1"
May 13 00:50:36.313602 kubelet[1401]: E0513 00:50:36.313577 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:36.406007 kernel: FS-Cache: Loaded
May 13 00:50:36.663008 kernel: RPC: Registered named UNIX socket transport module.
May 13 00:50:36.663163 kernel: RPC: Registered udp transport module.
May 13 00:50:36.664361 kernel: RPC: Registered tcp transport module.
May 13 00:50:36.664412 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
May 13 00:50:36.718992 kernel: FS-Cache: Netfs 'nfs' registered for caching
May 13 00:50:36.899568 kernel: NFS: Registering the id_resolver key type
May 13 00:50:36.899689 kernel: Key type id_resolver registered
May 13 00:50:36.899713 kernel: Key type id_legacy registered
May 13 00:50:36.922222 nfsidmap[2757]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 13 00:50:36.925106 nfsidmap[2760]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 13 00:50:37.138193 env[1198]: time="2025-05-13T00:50:37.138142645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8bf2f3a0-e336-4470-8f37-b3bc5a994ddb,Namespace:default,Attempt:0,}"
May 13 00:50:37.171626 systemd-networkd[1025]: lxc99aa47f64a0f: Link UP
May 13 00:50:37.182005 kernel: eth0: renamed from tmpf5f86
May 13 00:50:37.188420 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:50:37.188633 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc99aa47f64a0f: link becomes ready
May 13 00:50:37.188530 systemd-networkd[1025]: lxc99aa47f64a0f: Gained carrier
May 13 00:50:37.313936 kubelet[1401]: E0513 00:50:37.313878 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:37.415455 env[1198]: time="2025-05-13T00:50:37.415355786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:50:37.415455 env[1198]: time="2025-05-13T00:50:37.415410439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:50:37.415682 env[1198]: time="2025-05-13T00:50:37.415427922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:50:37.415682 env[1198]: time="2025-05-13T00:50:37.415621496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5f866732b569d5b4707f4e21ccb6ec8f9f746653e396159db896b6f7213559b pid=2795 runtime=io.containerd.runc.v2
May 13 00:50:37.433401 systemd[1]: run-containerd-runc-k8s.io-f5f866732b569d5b4707f4e21ccb6ec8f9f746653e396159db896b6f7213559b-runc.0ilstd.mount: Deactivated successfully.
May 13 00:50:37.435783 systemd[1]: Started cri-containerd-f5f866732b569d5b4707f4e21ccb6ec8f9f746653e396159db896b6f7213559b.scope.
May 13 00:50:37.451136 systemd-resolved[1139]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:50:37.478292 env[1198]: time="2025-05-13T00:50:37.478227143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8bf2f3a0-e336-4470-8f37-b3bc5a994ddb,Namespace:default,Attempt:0,} returns sandbox id \"f5f866732b569d5b4707f4e21ccb6ec8f9f746653e396159db896b6f7213559b\""
May 13 00:50:37.479660 env[1198]: time="2025-05-13T00:50:37.479619035Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 13 00:50:37.934864 env[1198]: time="2025-05-13T00:50:37.934821388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:37.936558 env[1198]: time="2025-05-13T00:50:37.936529887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:37.938261 env[1198]: time="2025-05-13T00:50:37.938238266Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:37.940016 env[1198]: time="2025-05-13T00:50:37.939979646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:50:37.940685 env[1198]: time="2025-05-13T00:50:37.940649448Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\""
May 13 00:50:37.942590 env[1198]: time="2025-05-13T00:50:37.942560639Z" level=info msg="CreateContainer within sandbox \"f5f866732b569d5b4707f4e21ccb6ec8f9f746653e396159db896b6f7213559b\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 13 00:50:37.956469 env[1198]: time="2025-05-13T00:50:37.956435131Z" level=info msg="CreateContainer within sandbox \"f5f866732b569d5b4707f4e21ccb6ec8f9f746653e396159db896b6f7213559b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"dc727c1ca8b2fdcc8c37bd6845eb2cc68660b85f3bc3ca4e04275e87fcd50176\""
May 13 00:50:37.956796 env[1198]: time="2025-05-13T00:50:37.956772286Z" level=info msg="StartContainer for \"dc727c1ca8b2fdcc8c37bd6845eb2cc68660b85f3bc3ca4e04275e87fcd50176\""
May 13 00:50:37.971252 systemd[1]: Started cri-containerd-dc727c1ca8b2fdcc8c37bd6845eb2cc68660b85f3bc3ca4e04275e87fcd50176.scope.
May 13 00:50:37.996003 env[1198]: time="2025-05-13T00:50:37.995932368Z" level=info msg="StartContainer for \"dc727c1ca8b2fdcc8c37bd6845eb2cc68660b85f3bc3ca4e04275e87fcd50176\" returns successfully"
May 13 00:50:38.258079 systemd-networkd[1025]: lxc99aa47f64a0f: Gained IPv6LL
May 13 00:50:38.315021 kubelet[1401]: E0513 00:50:38.314990 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:38.918107 kubelet[1401]: I0513 00:50:38.918055 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.455808382 podStartE2EDuration="17.918040349s" podCreationTimestamp="2025-05-13 00:50:21 +0000 UTC" firstStartedPulling="2025-05-13 00:50:37.479337916 +0000 UTC m=+51.565936001" lastFinishedPulling="2025-05-13 00:50:37.941569883 +0000 UTC m=+52.028167968" observedRunningTime="2025-05-13 00:50:38.917750193 +0000 UTC m=+53.004348278" watchObservedRunningTime="2025-05-13 00:50:38.918040349 +0000 UTC m=+53.004638435"
May 13 00:50:39.315741 kubelet[1401]: E0513 00:50:39.315707 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:40.316820 kubelet[1401]: E0513 00:50:40.316791 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:41.317051 kubelet[1401]: E0513 00:50:41.317019 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:42.318030 kubelet[1401]: E0513 00:50:42.318006 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:43.318491 kubelet[1401]: E0513 00:50:43.318453 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:43.381197 env[1198]: time="2025-05-13T00:50:43.381132589Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:50:43.385869 env[1198]: time="2025-05-13T00:50:43.385847365Z" level=info msg="StopContainer for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" with timeout 2 (s)"
May 13 00:50:43.386049 env[1198]: time="2025-05-13T00:50:43.386034256Z" level=info msg="Stop container \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" with signal terminated"
May 13 00:50:43.390904 systemd-networkd[1025]: lxc_health: Link DOWN
May 13 00:50:43.390913 systemd-networkd[1025]: lxc_health: Lost carrier
May 13 00:50:43.433263 systemd[1]: cri-containerd-6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1.scope: Deactivated successfully.
May 13 00:50:43.433513 systemd[1]: cri-containerd-6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1.scope: Consumed 5.996s CPU time.
May 13 00:50:43.447085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1-rootfs.mount: Deactivated successfully.
May 13 00:50:43.452813 env[1198]: time="2025-05-13T00:50:43.452763579Z" level=info msg="shim disconnected" id=6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1
May 13 00:50:43.452813 env[1198]: time="2025-05-13T00:50:43.452811849Z" level=warning msg="cleaning up after shim disconnected" id=6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1 namespace=k8s.io
May 13 00:50:43.453000 env[1198]: time="2025-05-13T00:50:43.452820385Z" level=info msg="cleaning up dead shim"
May 13 00:50:43.458243 env[1198]: time="2025-05-13T00:50:43.458201325Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n"
May 13 00:50:43.461077 env[1198]: time="2025-05-13T00:50:43.461046381Z" level=info msg="StopContainer for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" returns successfully"
May 13 00:50:43.461688 env[1198]: time="2025-05-13T00:50:43.461633347Z" level=info msg="StopPodSandbox for \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\""
May 13 00:50:43.461825 env[1198]: time="2025-05-13T00:50:43.461697107Z" level=info msg="Container to stop \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:50:43.461825 env[1198]: time="2025-05-13T00:50:43.461709730Z" level=info msg="Container to stop \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:50:43.461825 env[1198]: time="2025-05-13T00:50:43.461719859Z" level=info msg="Container to stop \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:50:43.461825 env[1198]: time="2025-05-13T00:50:43.461728886Z" level=info msg="Container to stop \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:50:43.461825 env[1198]: time="2025-05-13T00:50:43.461739005Z" level=info msg="Container to stop \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:50:43.463472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf-shm.mount: Deactivated successfully.
May 13 00:50:43.466216 systemd[1]: cri-containerd-3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf.scope: Deactivated successfully.
May 13 00:50:43.484099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf-rootfs.mount: Deactivated successfully.
May 13 00:50:43.487249 env[1198]: time="2025-05-13T00:50:43.487208764Z" level=info msg="shim disconnected" id=3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf
May 13 00:50:43.487350 env[1198]: time="2025-05-13T00:50:43.487249420Z" level=warning msg="cleaning up after shim disconnected" id=3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf namespace=k8s.io
May 13 00:50:43.487350 env[1198]: time="2025-05-13T00:50:43.487257946Z" level=info msg="cleaning up dead shim"
May 13 00:50:43.493247 env[1198]: time="2025-05-13T00:50:43.493208528Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2958 runtime=io.containerd.runc.v2\n"
May 13 00:50:43.493547 env[1198]: time="2025-05-13T00:50:43.493513082Z" level=info msg="TearDown network for sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" successfully"
May 13 00:50:43.493547 env[1198]: time="2025-05-13T00:50:43.493538159Z" level=info msg="StopPodSandbox for \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" returns successfully"
May 13 00:50:43.522334 kubelet[1401]: I0513 00:50:43.522305 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-config-path\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522481 kubelet[1401]: I0513 00:50:43.522356 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-run\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522481 kubelet[1401]: I0513 00:50:43.522377 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-clustermesh-secrets\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522481 kubelet[1401]: I0513 00:50:43.522391 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cni-path\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522481 kubelet[1401]: I0513 00:50:43.522407 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hubble-tls\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522481 kubelet[1401]: I0513 00:50:43.522421 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-cgroup\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522481 kubelet[1401]: I0513 00:50:43.522433 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-net\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522639 kubelet[1401]: I0513 00:50:43.522444 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-kernel\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522639 kubelet[1401]: I0513 00:50:43.522456 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-bpf-maps\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522639 kubelet[1401]: I0513 00:50:43.522470 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppvb8\" (UniqueName: \"kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-kube-api-access-ppvb8\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522639 kubelet[1401]: I0513 00:50:43.522457 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.522639 kubelet[1401]: I0513 00:50:43.522483 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hostproc\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522639 kubelet[1401]: I0513 00:50:43.522497 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-etc-cni-netd\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522769 kubelet[1401]: I0513 00:50:43.522510 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-lib-modules\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522769 kubelet[1401]: I0513 00:50:43.522522 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-xtables-lock\") pod \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\" (UID: \"09fa62f7-1b5d-4b97-9f90-f9c08f150e9e\") "
May 13 00:50:43.522769 kubelet[1401]: I0513 00:50:43.522521 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.522769 kubelet[1401]: I0513 00:50:43.522547 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-run\") on node \"10.0.0.142\" DevicePath \"\""
May 13 00:50:43.522769 kubelet[1401]: I0513 00:50:43.522556 1401 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-net\") on node \"10.0.0.142\" DevicePath \"\""
May 13 00:50:43.522769 kubelet[1401]: I0513 00:50:43.522579 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.522912 kubelet[1401]: I0513 00:50:43.522597 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.522912 kubelet[1401]: I0513 00:50:43.522609 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.522959 kubelet[1401]: I0513 00:50:43.522918 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.522959 kubelet[1401]: I0513 00:50:43.522939 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hostproc" (OuterVolumeSpecName: "hostproc") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.524165 kubelet[1401]: I0513 00:50:43.523245 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.524165 kubelet[1401]: I0513 00:50:43.523247 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.524165 kubelet[1401]: I0513 00:50:43.523266 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cni-path" (OuterVolumeSpecName: "cni-path") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:50:43.524165 kubelet[1401]: I0513 00:50:43.524117 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:50:43.525275 kubelet[1401]: I0513 00:50:43.525249 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-kube-api-access-ppvb8" (OuterVolumeSpecName: "kube-api-access-ppvb8") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "kube-api-access-ppvb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:50:43.525343 kubelet[1401]: I0513 00:50:43.525313 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:50:43.525751 kubelet[1401]: I0513 00:50:43.525721 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" (UID: "09fa62f7-1b5d-4b97-9f90-f9c08f150e9e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:50:43.526764 systemd[1]: var-lib-kubelet-pods-09fa62f7\x2d1b5d\x2d4b97\x2d9f90\x2df9c08f150e9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dppvb8.mount: Deactivated successfully.
May 13 00:50:43.526871 systemd[1]: var-lib-kubelet-pods-09fa62f7\x2d1b5d\x2d4b97\x2d9f90\x2df9c08f150e9e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:50:43.526924 systemd[1]: var-lib-kubelet-pods-09fa62f7\x2d1b5d\x2d4b97\x2d9f90\x2df9c08f150e9e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623124 1401 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cni-path\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623152 1401 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-clustermesh-secrets\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623162 1401 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-host-proc-sys-kernel\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623170 1401 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hubble-tls\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623178 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-cgroup\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623185 1401 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-etc-cni-netd\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623191 kubelet[1401]: I0513 00:50:43.623192 1401 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-lib-modules\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623378 kubelet[1401]: I0513 00:50:43.623199 1401 reconciler_common.go:288] "Volume detached 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-xtables-lock\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623378 kubelet[1401]: I0513 00:50:43.623207 1401 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-bpf-maps\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623378 kubelet[1401]: I0513 00:50:43.623214 1401 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ppvb8\" (UniqueName: \"kubernetes.io/projected/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-kube-api-access-ppvb8\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623378 kubelet[1401]: I0513 00:50:43.623221 1401 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-hostproc\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.623378 kubelet[1401]: I0513 00:50:43.623228 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e-cilium-config-path\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:43.920974 kubelet[1401]: I0513 00:50:43.920914 1401 scope.go:117] "RemoveContainer" containerID="6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1" May 13 00:50:43.921971 env[1198]: time="2025-05-13T00:50:43.921931985Z" level=info msg="RemoveContainer for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\"" May 13 00:50:43.923909 systemd[1]: Removed slice kubepods-burstable-pod09fa62f7_1b5d_4b97_9f90_f9c08f150e9e.slice. May 13 00:50:43.924005 systemd[1]: kubepods-burstable-pod09fa62f7_1b5d_4b97_9f90_f9c08f150e9e.slice: Consumed 6.084s CPU time. 
May 13 00:50:43.925200 env[1198]: time="2025-05-13T00:50:43.925168859Z" level=info msg="RemoveContainer for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" returns successfully" May 13 00:50:43.925333 kubelet[1401]: I0513 00:50:43.925319 1401 scope.go:117] "RemoveContainer" containerID="cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d" May 13 00:50:43.926404 env[1198]: time="2025-05-13T00:50:43.926372634Z" level=info msg="RemoveContainer for \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\"" May 13 00:50:43.928885 env[1198]: time="2025-05-13T00:50:43.928854677Z" level=info msg="RemoveContainer for \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\" returns successfully" May 13 00:50:43.929162 kubelet[1401]: I0513 00:50:43.929148 1401 scope.go:117] "RemoveContainer" containerID="b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df" May 13 00:50:43.929924 env[1198]: time="2025-05-13T00:50:43.929902470Z" level=info msg="RemoveContainer for \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\"" May 13 00:50:43.932693 env[1198]: time="2025-05-13T00:50:43.932663468Z" level=info msg="RemoveContainer for \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\" returns successfully" May 13 00:50:43.932809 kubelet[1401]: I0513 00:50:43.932783 1401 scope.go:117] "RemoveContainer" containerID="7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff" May 13 00:50:43.933565 env[1198]: time="2025-05-13T00:50:43.933537673Z" level=info msg="RemoveContainer for \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\"" May 13 00:50:43.936222 env[1198]: time="2025-05-13T00:50:43.936198994Z" level=info msg="RemoveContainer for \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\" returns successfully" May 13 00:50:43.936317 kubelet[1401]: I0513 00:50:43.936301 1401 scope.go:117] "RemoveContainer" 
containerID="6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9" May 13 00:50:43.937083 env[1198]: time="2025-05-13T00:50:43.937044205Z" level=info msg="RemoveContainer for \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\"" May 13 00:50:43.939748 env[1198]: time="2025-05-13T00:50:43.939719382Z" level=info msg="RemoveContainer for \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\" returns successfully" May 13 00:50:43.939844 kubelet[1401]: I0513 00:50:43.939827 1401 scope.go:117] "RemoveContainer" containerID="6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1" May 13 00:50:43.940052 env[1198]: time="2025-05-13T00:50:43.939992605Z" level=error msg="ContainerStatus for \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\": not found" May 13 00:50:43.940166 kubelet[1401]: E0513 00:50:43.940147 1401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\": not found" containerID="6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1" May 13 00:50:43.940232 kubelet[1401]: I0513 00:50:43.940170 1401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1"} err="failed to get container status \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6bbadf52ad2f22b715d370c9c62352053ea889aaf238b9e94b28b04ad501cfe1\": not found" May 13 00:50:43.940266 kubelet[1401]: I0513 00:50:43.940232 1401 scope.go:117] "RemoveContainer" 
containerID="cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d" May 13 00:50:43.940396 env[1198]: time="2025-05-13T00:50:43.940358425Z" level=error msg="ContainerStatus for \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\": not found" May 13 00:50:43.940480 kubelet[1401]: E0513 00:50:43.940461 1401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\": not found" containerID="cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d" May 13 00:50:43.940539 kubelet[1401]: I0513 00:50:43.940478 1401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d"} err="failed to get container status \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb64878f445a9605869d062a93dd799836500875bc97bbeed569a2771f5cbf9d\": not found" May 13 00:50:43.940539 kubelet[1401]: I0513 00:50:43.940491 1401 scope.go:117] "RemoveContainer" containerID="b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df" May 13 00:50:43.940668 env[1198]: time="2025-05-13T00:50:43.940618353Z" level=error msg="ContainerStatus for \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\": not found" May 13 00:50:43.940758 kubelet[1401]: E0513 00:50:43.940743 1401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\": not found" containerID="b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df" May 13 00:50:43.940798 kubelet[1401]: I0513 00:50:43.940758 1401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df"} err="failed to get container status \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\": rpc error: code = NotFound desc = an error occurred when try to find container \"b063bda88d38ff1140466c205ad50f3d315d3d71c2841f7a90cfe47ef007b9df\": not found" May 13 00:50:43.940798 kubelet[1401]: I0513 00:50:43.940768 1401 scope.go:117] "RemoveContainer" containerID="7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff" May 13 00:50:43.940934 env[1198]: time="2025-05-13T00:50:43.940901927Z" level=error msg="ContainerStatus for \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\": not found" May 13 00:50:43.941040 kubelet[1401]: E0513 00:50:43.940997 1401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\": not found" containerID="7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff" May 13 00:50:43.941040 kubelet[1401]: I0513 00:50:43.941011 1401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff"} err="failed to get container status \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"7bb30aa936ab814db0418bdd7517ae6be0f0ed8e3a463fa32d89a7fd3e6bf2ff\": not found" May 13 00:50:43.941040 kubelet[1401]: I0513 00:50:43.941020 1401 scope.go:117] "RemoveContainer" containerID="6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9" May 13 00:50:43.941179 env[1198]: time="2025-05-13T00:50:43.941128004Z" level=error msg="ContainerStatus for \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\": not found" May 13 00:50:43.941291 kubelet[1401]: E0513 00:50:43.941263 1401 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\": not found" containerID="6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9" May 13 00:50:43.941355 kubelet[1401]: I0513 00:50:43.941301 1401 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9"} err="failed to get container status \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c2e53555ffd45adebc0655c6b1c244096fda6ced70ddce16b686086427607a9\": not found" May 13 00:50:44.319456 kubelet[1401]: E0513 00:50:44.319439 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:44.813869 kubelet[1401]: I0513 00:50:44.813836 1401 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" path="/var/lib/kubelet/pods/09fa62f7-1b5d-4b97-9f90-f9c08f150e9e/volumes" May 13 00:50:45.319579 kubelet[1401]: E0513 00:50:45.319543 1401 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:46.100691 kubelet[1401]: E0513 00:50:46.100659 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" containerName="mount-bpf-fs" May 13 00:50:46.100691 kubelet[1401]: E0513 00:50:46.100685 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" containerName="clean-cilium-state" May 13 00:50:46.100691 kubelet[1401]: E0513 00:50:46.100691 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" containerName="mount-cgroup" May 13 00:50:46.100691 kubelet[1401]: E0513 00:50:46.100696 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" containerName="apply-sysctl-overwrites" May 13 00:50:46.100883 kubelet[1401]: E0513 00:50:46.100702 1401 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" containerName="cilium-agent" May 13 00:50:46.100883 kubelet[1401]: I0513 00:50:46.100721 1401 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fa62f7-1b5d-4b97-9f90-f9c08f150e9e" containerName="cilium-agent" May 13 00:50:46.105333 systemd[1]: Created slice kubepods-besteffort-podd8a6386d_01f0_444e_8daa_c3bbdcbf0034.slice. May 13 00:50:46.108682 systemd[1]: Created slice kubepods-burstable-podbbdf58ad_59f5_418a_b9e1_af89618424e3.slice. 
May 13 00:50:46.137529 kubelet[1401]: I0513 00:50:46.137485 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-hostproc\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137529 kubelet[1401]: I0513 00:50:46.137514 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjpn\" (UniqueName: \"kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-kube-api-access-fwjpn\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137529 kubelet[1401]: I0513 00:50:46.137533 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-xtables-lock\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137773 kubelet[1401]: I0513 00:50:46.137547 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-run\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137773 kubelet[1401]: I0513 00:50:46.137560 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-bpf-maps\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137773 kubelet[1401]: I0513 00:50:46.137573 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cni-path\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137773 kubelet[1401]: I0513 00:50:46.137584 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-etc-cni-netd\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137773 kubelet[1401]: I0513 00:50:46.137651 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-hubble-tls\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137773 kubelet[1401]: I0513 00:50:46.137692 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-lib-modules\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137909 kubelet[1401]: I0513 00:50:46.137712 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-kernel\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137909 kubelet[1401]: I0513 00:50:46.137730 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8a6386d-01f0-444e-8daa-c3bbdcbf0034-cilium-config-path\") 
pod \"cilium-operator-5d85765b45-9z2kd\" (UID: \"d8a6386d-01f0-444e-8daa-c3bbdcbf0034\") " pod="kube-system/cilium-operator-5d85765b45-9z2kd" May 13 00:50:46.137909 kubelet[1401]: I0513 00:50:46.137754 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpphc\" (UniqueName: \"kubernetes.io/projected/d8a6386d-01f0-444e-8daa-c3bbdcbf0034-kube-api-access-cpphc\") pod \"cilium-operator-5d85765b45-9z2kd\" (UID: \"d8a6386d-01f0-444e-8daa-c3bbdcbf0034\") " pod="kube-system/cilium-operator-5d85765b45-9z2kd" May 13 00:50:46.137909 kubelet[1401]: I0513 00:50:46.137792 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-cgroup\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.137909 kubelet[1401]: I0513 00:50:46.137816 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-clustermesh-secrets\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.138042 kubelet[1401]: I0513 00:50:46.137829 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-config-path\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.138042 kubelet[1401]: I0513 00:50:46.137847 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-ipsec-secrets\") pod 
\"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.138042 kubelet[1401]: I0513 00:50:46.137859 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-net\") pod \"cilium-8nzcb\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " pod="kube-system/cilium-8nzcb" May 13 00:50:46.246043 kubelet[1401]: E0513 00:50:46.246001 1401 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets kube-api-access-fwjpn], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-8nzcb" podUID="bbdf58ad-59f5-418a-b9e1-af89618424e3" May 13 00:50:46.280875 kubelet[1401]: E0513 00:50:46.280819 1401 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:46.305960 env[1198]: time="2025-05-13T00:50:46.305916280Z" level=info msg="StopPodSandbox for \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\"" May 13 00:50:46.306167 env[1198]: time="2025-05-13T00:50:46.306021358Z" level=info msg="TearDown network for sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" successfully" May 13 00:50:46.306167 env[1198]: time="2025-05-13T00:50:46.306054460Z" level=info msg="StopPodSandbox for \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" returns successfully" May 13 00:50:46.306433 env[1198]: time="2025-05-13T00:50:46.306403276Z" level=info msg="RemovePodSandbox for \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\"" May 13 00:50:46.306579 env[1198]: time="2025-05-13T00:50:46.306429056Z" level=info msg="Forcibly stopping sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\"" May 13 00:50:46.306579 env[1198]: time="2025-05-13T00:50:46.306478068Z" 
level=info msg="TearDown network for sandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" successfully" May 13 00:50:46.310450 env[1198]: time="2025-05-13T00:50:46.310400760Z" level=info msg="RemovePodSandbox \"3b41b2e4617fc791781e0d5a444a94a5d2bec7e735e74812389e327492105acf\" returns successfully" May 13 00:50:46.319685 kubelet[1401]: E0513 00:50:46.319654 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:46.407668 kubelet[1401]: E0513 00:50:46.407538 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:46.407928 env[1198]: time="2025-05-13T00:50:46.407878641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9z2kd,Uid:d8a6386d-01f0-444e-8daa-c3bbdcbf0034,Namespace:kube-system,Attempt:0,}" May 13 00:50:46.419215 env[1198]: time="2025-05-13T00:50:46.419166034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:46.419215 env[1198]: time="2025-05-13T00:50:46.419196311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:46.419215 env[1198]: time="2025-05-13T00:50:46.419205559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:46.419344 env[1198]: time="2025-05-13T00:50:46.419291450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29c3dfbc9b4ce60271fc9b0cbfc5538c3abc62ce9ee44d4d85472eb04ddd9a44 pid=2986 runtime=io.containerd.runc.v2 May 13 00:50:46.428923 systemd[1]: Started cri-containerd-29c3dfbc9b4ce60271fc9b0cbfc5538c3abc62ce9ee44d4d85472eb04ddd9a44.scope. May 13 00:50:46.457763 env[1198]: time="2025-05-13T00:50:46.457711285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9z2kd,Uid:d8a6386d-01f0-444e-8daa-c3bbdcbf0034,Namespace:kube-system,Attempt:0,} returns sandbox id \"29c3dfbc9b4ce60271fc9b0cbfc5538c3abc62ce9ee44d4d85472eb04ddd9a44\"" May 13 00:50:46.458251 kubelet[1401]: E0513 00:50:46.458229 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:46.459086 env[1198]: time="2025-05-13T00:50:46.459064853Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:50:46.799451 kubelet[1401]: E0513 00:50:46.799346 1401 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:50:46.945432 kubelet[1401]: I0513 00:50:46.945387 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwjpn\" (UniqueName: \"kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-kube-api-access-fwjpn\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945432 kubelet[1401]: I0513 00:50:46.945414 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-net\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945432 kubelet[1401]: I0513 00:50:46.945429 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-bpf-maps\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945432 kubelet[1401]: I0513 00:50:46.945442 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cni-path\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945675 kubelet[1401]: I0513 00:50:46.945459 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-clustermesh-secrets\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945675 kubelet[1401]: I0513 00:50:46.945473 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-cgroup\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945675 kubelet[1401]: I0513 00:50:46.945485 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-hostproc\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945675 kubelet[1401]: 
I0513 00:50:46.945486 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.945675 kubelet[1401]: I0513 00:50:46.945498 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-kernel\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945803 kubelet[1401]: I0513 00:50:46.945486 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.945803 kubelet[1401]: I0513 00:50:46.945512 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-ipsec-secrets\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945803 kubelet[1401]: I0513 00:50:46.945525 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-run\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945803 kubelet[1401]: I0513 00:50:46.945538 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-xtables-lock\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945803 kubelet[1401]: I0513 00:50:46.945551 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-etc-cni-netd\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945803 kubelet[1401]: I0513 00:50:46.945565 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-hubble-tls\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945935 kubelet[1401]: I0513 00:50:46.945579 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-lib-modules\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945935 kubelet[1401]: I0513 00:50:46.945596 1401 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-config-path\") pod \"bbdf58ad-59f5-418a-b9e1-af89618424e3\" (UID: \"bbdf58ad-59f5-418a-b9e1-af89618424e3\") " May 13 00:50:46.945935 kubelet[1401]: I0513 00:50:46.945616 1401 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-net\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:46.945935 kubelet[1401]: I0513 00:50:46.945625 1401 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-bpf-maps\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:46.945935 kubelet[1401]: I0513 00:50:46.945825 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.945935 kubelet[1401]: I0513 00:50:46.945844 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.946092 kubelet[1401]: I0513 00:50:46.945856 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-hostproc" (OuterVolumeSpecName: "hostproc") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.946092 kubelet[1401]: I0513 00:50:46.945866 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.947143 kubelet[1401]: I0513 00:50:46.947112 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:50:46.947143 kubelet[1401]: I0513 00:50:46.947142 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.947209 kubelet[1401]: I0513 00:50:46.947155 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.947602 kubelet[1401]: I0513 00:50:46.947566 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:50:46.947602 kubelet[1401]: I0513 00:50:46.947596 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cni-path" (OuterVolumeSpecName: "cni-path") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.947787 kubelet[1401]: I0513 00:50:46.947608 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:50:46.947787 kubelet[1401]: I0513 00:50:46.947774 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-kube-api-access-fwjpn" (OuterVolumeSpecName: "kube-api-access-fwjpn") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "kube-api-access-fwjpn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:50:46.948874 kubelet[1401]: I0513 00:50:46.948850 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:50:46.949513 kubelet[1401]: I0513 00:50:46.949490 1401 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bbdf58ad-59f5-418a-b9e1-af89618424e3" (UID: "bbdf58ad-59f5-418a-b9e1-af89618424e3"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:50:47.045826 kubelet[1401]: I0513 00:50:47.045805 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-cgroup\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045826 kubelet[1401]: I0513 00:50:47.045820 1401 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cni-path\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045826 kubelet[1401]: I0513 00:50:47.045828 1401 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-clustermesh-secrets\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045836 1401 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-hostproc\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045843 1401 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-host-proc-sys-kernel\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045850 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-ipsec-secrets\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045856 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-run\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 
00:50:47.045862 1401 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-xtables-lock\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045869 1401 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-etc-cni-netd\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045875 1401 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-hubble-tls\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.045920 kubelet[1401]: I0513 00:50:47.045881 1401 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbdf58ad-59f5-418a-b9e1-af89618424e3-lib-modules\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.046111 kubelet[1401]: I0513 00:50:47.045887 1401 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbdf58ad-59f5-418a-b9e1-af89618424e3-cilium-config-path\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.046111 kubelet[1401]: I0513 00:50:47.045895 1401 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fwjpn\" (UniqueName: \"kubernetes.io/projected/bbdf58ad-59f5-418a-b9e1-af89618424e3-kube-api-access-fwjpn\") on node \"10.0.0.142\" DevicePath \"\"" May 13 00:50:47.243651 systemd[1]: var-lib-kubelet-pods-bbdf58ad\x2d59f5\x2d418a\x2db9e1\x2daf89618424e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfwjpn.mount: Deactivated successfully. May 13 00:50:47.243730 systemd[1]: var-lib-kubelet-pods-bbdf58ad\x2d59f5\x2d418a\x2db9e1\x2daf89618424e3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 00:50:47.243790 systemd[1]: var-lib-kubelet-pods-bbdf58ad\x2d59f5\x2d418a\x2db9e1\x2daf89618424e3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:50:47.243836 systemd[1]: var-lib-kubelet-pods-bbdf58ad\x2d59f5\x2d418a\x2db9e1\x2daf89618424e3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 13 00:50:47.320087 kubelet[1401]: E0513 00:50:47.320065 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:47.929192 systemd[1]: Removed slice kubepods-burstable-podbbdf58ad_59f5_418a_b9e1_af89618424e3.slice. May 13 00:50:47.959904 systemd[1]: Created slice kubepods-burstable-pod2797001f_5883_46f7_9a6f_f2441d940f31.slice. May 13 00:50:48.002894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271196254.mount: Deactivated successfully. May 13 00:50:48.052179 kubelet[1401]: I0513 00:50:48.052133 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-etc-cni-netd\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052305 kubelet[1401]: I0513 00:50:48.052202 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-lib-modules\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052305 kubelet[1401]: I0513 00:50:48.052227 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-xtables-lock\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") 
" pod="kube-system/cilium-mcptk" May 13 00:50:48.052305 kubelet[1401]: I0513 00:50:48.052244 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-bpf-maps\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052305 kubelet[1401]: I0513 00:50:48.052259 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-cilium-cgroup\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052305 kubelet[1401]: I0513 00:50:48.052278 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-host-proc-sys-net\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052305 kubelet[1401]: I0513 00:50:48.052300 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-host-proc-sys-kernel\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052432 kubelet[1401]: I0513 00:50:48.052321 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2797001f-5883-46f7-9a6f-f2441d940f31-cilium-ipsec-secrets\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052432 kubelet[1401]: I0513 00:50:48.052338 1401 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2797001f-5883-46f7-9a6f-f2441d940f31-cilium-config-path\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052432 kubelet[1401]: I0513 00:50:48.052355 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fspq7\" (UniqueName: \"kubernetes.io/projected/2797001f-5883-46f7-9a6f-f2441d940f31-kube-api-access-fspq7\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052432 kubelet[1401]: I0513 00:50:48.052375 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-hostproc\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052432 kubelet[1401]: I0513 00:50:48.052392 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2797001f-5883-46f7-9a6f-f2441d940f31-hubble-tls\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052432 kubelet[1401]: I0513 00:50:48.052410 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-cilium-run\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052557 kubelet[1401]: I0513 00:50:48.052424 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/2797001f-5883-46f7-9a6f-f2441d940f31-cni-path\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.052557 kubelet[1401]: I0513 00:50:48.052443 1401 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2797001f-5883-46f7-9a6f-f2441d940f31-clustermesh-secrets\") pod \"cilium-mcptk\" (UID: \"2797001f-5883-46f7-9a6f-f2441d940f31\") " pod="kube-system/cilium-mcptk" May 13 00:50:48.270122 kubelet[1401]: E0513 00:50:48.270035 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:48.270612 env[1198]: time="2025-05-13T00:50:48.270560896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mcptk,Uid:2797001f-5883-46f7-9a6f-f2441d940f31,Namespace:kube-system,Attempt:0,}" May 13 00:50:48.281694 env[1198]: time="2025-05-13T00:50:48.281626297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:50:48.281694 env[1198]: time="2025-05-13T00:50:48.281666223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:50:48.281694 env[1198]: time="2025-05-13T00:50:48.281684949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:50:48.281880 env[1198]: time="2025-05-13T00:50:48.281845821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e pid=3036 runtime=io.containerd.runc.v2 May 13 00:50:48.297837 systemd[1]: Started cri-containerd-ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e.scope. May 13 00:50:48.302913 kubelet[1401]: I0513 00:50:48.302195 1401 setters.go:600] "Node became not ready" node="10.0.0.142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:50:48Z","lastTransitionTime":"2025-05-13T00:50:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 00:50:48.315559 env[1198]: time="2025-05-13T00:50:48.315510106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mcptk,Uid:2797001f-5883-46f7-9a6f-f2441d940f31,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\"" May 13 00:50:48.316275 kubelet[1401]: E0513 00:50:48.316254 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:48.317670 env[1198]: time="2025-05-13T00:50:48.317630916Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:50:48.320724 kubelet[1401]: E0513 00:50:48.320505 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:48.329321 env[1198]: time="2025-05-13T00:50:48.329273934Z" level=info 
msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015\"" May 13 00:50:48.329790 env[1198]: time="2025-05-13T00:50:48.329754668Z" level=info msg="StartContainer for \"05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015\"" May 13 00:50:48.342810 systemd[1]: Started cri-containerd-05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015.scope. May 13 00:50:48.365289 env[1198]: time="2025-05-13T00:50:48.365251020Z" level=info msg="StartContainer for \"05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015\" returns successfully" May 13 00:50:48.369328 systemd[1]: cri-containerd-05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015.scope: Deactivated successfully. May 13 00:50:48.442075 env[1198]: time="2025-05-13T00:50:48.441995758Z" level=info msg="shim disconnected" id=05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015 May 13 00:50:48.442075 env[1198]: time="2025-05-13T00:50:48.442042856Z" level=warning msg="cleaning up after shim disconnected" id=05729564cef715f99585438918b278b9d5d695dc37c51b4c0127b837b51b1015 namespace=k8s.io May 13 00:50:48.442075 env[1198]: time="2025-05-13T00:50:48.442050961Z" level=info msg="cleaning up dead shim" May 13 00:50:48.449723 env[1198]: time="2025-05-13T00:50:48.449667172Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3120 runtime=io.containerd.runc.v2\n" May 13 00:50:48.813926 kubelet[1401]: I0513 00:50:48.813878 1401 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdf58ad-59f5-418a-b9e1-af89618424e3" path="/var/lib/kubelet/pods/bbdf58ad-59f5-418a-b9e1-af89618424e3/volumes" May 13 00:50:48.930087 kubelet[1401]: E0513 00:50:48.930056 1401 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:48.931732 env[1198]: time="2025-05-13T00:50:48.931677640Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:50:48.947082 env[1198]: time="2025-05-13T00:50:48.947023806Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d\"" May 13 00:50:48.947531 env[1198]: time="2025-05-13T00:50:48.947488960Z" level=info msg="StartContainer for \"864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d\"" May 13 00:50:48.964390 systemd[1]: Started cri-containerd-864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d.scope. May 13 00:50:48.985647 env[1198]: time="2025-05-13T00:50:48.985604060Z" level=info msg="StartContainer for \"864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d\" returns successfully" May 13 00:50:48.989048 systemd[1]: cri-containerd-864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d.scope: Deactivated successfully. 
May 13 00:50:49.284634 env[1198]: time="2025-05-13T00:50:49.284495748Z" level=info msg="shim disconnected" id=864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d May 13 00:50:49.284634 env[1198]: time="2025-05-13T00:50:49.284542686Z" level=warning msg="cleaning up after shim disconnected" id=864e46973728e050891f49cf7c217a49657f39bade80de5be60200bc49864e3d namespace=k8s.io May 13 00:50:49.284634 env[1198]: time="2025-05-13T00:50:49.284551583Z" level=info msg="cleaning up dead shim" May 13 00:50:49.291030 env[1198]: time="2025-05-13T00:50:49.290980079Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3182 runtime=io.containerd.runc.v2\n" May 13 00:50:49.320825 kubelet[1401]: E0513 00:50:49.320772 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:49.724021 env[1198]: time="2025-05-13T00:50:49.723945846Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:49.726648 env[1198]: time="2025-05-13T00:50:49.726603355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:49.728661 env[1198]: time="2025-05-13T00:50:49.728607677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:50:49.729190 env[1198]: time="2025-05-13T00:50:49.729146280Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 00:50:49.731192 env[1198]: time="2025-05-13T00:50:49.731161853Z" level=info msg="CreateContainer within sandbox \"29c3dfbc9b4ce60271fc9b0cbfc5538c3abc62ce9ee44d4d85472eb04ddd9a44\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:50:49.746108 env[1198]: time="2025-05-13T00:50:49.746046509Z" level=info msg="CreateContainer within sandbox \"29c3dfbc9b4ce60271fc9b0cbfc5538c3abc62ce9ee44d4d85472eb04ddd9a44\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"81aae5f35df170dd1c3dc72b448f8b68bfef70f007caa81251a09d9d71b65a67\"" May 13 00:50:49.746638 env[1198]: time="2025-05-13T00:50:49.746586084Z" level=info msg="StartContainer for \"81aae5f35df170dd1c3dc72b448f8b68bfef70f007caa81251a09d9d71b65a67\"" May 13 00:50:49.763141 systemd[1]: Started cri-containerd-81aae5f35df170dd1c3dc72b448f8b68bfef70f007caa81251a09d9d71b65a67.scope. 
May 13 00:50:49.784738 env[1198]: time="2025-05-13T00:50:49.784643599Z" level=info msg="StartContainer for \"81aae5f35df170dd1c3dc72b448f8b68bfef70f007caa81251a09d9d71b65a67\" returns successfully" May 13 00:50:49.933506 kubelet[1401]: E0513 00:50:49.933471 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:49.935572 kubelet[1401]: E0513 00:50:49.935535 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:49.937501 env[1198]: time="2025-05-13T00:50:49.937447510Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:50:49.942550 kubelet[1401]: I0513 00:50:49.942489 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-9z2kd" podStartSLOduration=0.67136149 podStartE2EDuration="3.942472245s" podCreationTimestamp="2025-05-13 00:50:46 +0000 UTC" firstStartedPulling="2025-05-13 00:50:46.458834248 +0000 UTC m=+60.545432334" lastFinishedPulling="2025-05-13 00:50:49.729945004 +0000 UTC m=+63.816543089" observedRunningTime="2025-05-13 00:50:49.942201204 +0000 UTC m=+64.028799309" watchObservedRunningTime="2025-05-13 00:50:49.942472245 +0000 UTC m=+64.029070330" May 13 00:50:49.961448 env[1198]: time="2025-05-13T00:50:49.961360433Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36\"" May 13 00:50:49.962035 env[1198]: time="2025-05-13T00:50:49.961976933Z" level=info msg="StartContainer for 
\"5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36\"" May 13 00:50:49.978118 systemd[1]: Started cri-containerd-5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36.scope. May 13 00:50:50.007918 env[1198]: time="2025-05-13T00:50:50.006958345Z" level=info msg="StartContainer for \"5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36\" returns successfully" May 13 00:50:50.008666 systemd[1]: cri-containerd-5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36.scope: Deactivated successfully. May 13 00:50:50.033505 env[1198]: time="2025-05-13T00:50:50.033460996Z" level=info msg="shim disconnected" id=5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36 May 13 00:50:50.033505 env[1198]: time="2025-05-13T00:50:50.033502695Z" level=warning msg="cleaning up after shim disconnected" id=5f29e60e225908e12077bc1c82eb17b753ca3f735e956956f7ac2d4eab2a4e36 namespace=k8s.io May 13 00:50:50.033505 env[1198]: time="2025-05-13T00:50:50.033511882Z" level=info msg="cleaning up dead shim" May 13 00:50:50.041309 env[1198]: time="2025-05-13T00:50:50.041276790Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3277 runtime=io.containerd.runc.v2\n" May 13 00:50:50.321614 kubelet[1401]: E0513 00:50:50.321563 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:50:50.939711 kubelet[1401]: E0513 00:50:50.939669 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:50.939873 kubelet[1401]: E0513 00:50:50.939763 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:50:50.941470 env[1198]: 
time="2025-05-13T00:50:50.941431659Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:50:50.958541 env[1198]: time="2025-05-13T00:50:50.958488901Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479\""
May 13 00:50:50.959136 env[1198]: time="2025-05-13T00:50:50.959098447Z" level=info msg="StartContainer for \"798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479\""
May 13 00:50:50.975250 systemd[1]: Started cri-containerd-798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479.scope.
May 13 00:50:50.993641 systemd[1]: cri-containerd-798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479.scope: Deactivated successfully.
May 13 00:50:50.995289 env[1198]: time="2025-05-13T00:50:50.995245444Z" level=info msg="StartContainer for \"798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479\" returns successfully"
May 13 00:50:51.015181 env[1198]: time="2025-05-13T00:50:51.015130072Z" level=info msg="shim disconnected" id=798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479
May 13 00:50:51.015181 env[1198]: time="2025-05-13T00:50:51.015178634Z" level=warning msg="cleaning up after shim disconnected" id=798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479 namespace=k8s.io
May 13 00:50:51.015181 env[1198]: time="2025-05-13T00:50:51.015187331Z" level=info msg="cleaning up dead shim"
May 13 00:50:51.022648 env[1198]: time="2025-05-13T00:50:51.022581590Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:50:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3332 runtime=io.containerd.runc.v2\n"
May 13 00:50:51.243793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-798254b9ca10a8b5795843027f5b1fd2e7c392b574d4e10e84c44d6e34605479-rootfs.mount: Deactivated successfully.
May 13 00:50:51.322530 kubelet[1401]: E0513 00:50:51.322486 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:51.800309 kubelet[1401]: E0513 00:50:51.800275 1401 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:50:51.944097 kubelet[1401]: E0513 00:50:51.944066 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:50:51.945588 env[1198]: time="2025-05-13T00:50:51.945544274Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:50:51.968531 env[1198]: time="2025-05-13T00:50:51.968464782Z" level=info msg="CreateContainer within sandbox \"ac0ffe85188aad808c9d62ca0d0d44bce9195bfc8aae87ce96647faa8453ad9e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60355b495e088f5270c90d244faf3e14910b6ef179e3fa5fd447d3a95ebd2394\""
May 13 00:50:51.969071 env[1198]: time="2025-05-13T00:50:51.969034804Z" level=info msg="StartContainer for \"60355b495e088f5270c90d244faf3e14910b6ef179e3fa5fd447d3a95ebd2394\""
May 13 00:50:51.988985 systemd[1]: Started cri-containerd-60355b495e088f5270c90d244faf3e14910b6ef179e3fa5fd447d3a95ebd2394.scope.
May 13 00:50:52.017590 env[1198]: time="2025-05-13T00:50:52.017525723Z" level=info msg="StartContainer for \"60355b495e088f5270c90d244faf3e14910b6ef179e3fa5fd447d3a95ebd2394\" returns successfully"
May 13 00:50:52.321994 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:50:52.323217 kubelet[1401]: E0513 00:50:52.323175 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:52.948525 kubelet[1401]: E0513 00:50:52.948461 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:50:52.962196 kubelet[1401]: I0513 00:50:52.962113 1401 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mcptk" podStartSLOduration=5.962094192 podStartE2EDuration="5.962094192s" podCreationTimestamp="2025-05-13 00:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:50:52.962025544 +0000 UTC m=+67.048623649" watchObservedRunningTime="2025-05-13 00:50:52.962094192 +0000 UTC m=+67.048692307"
May 13 00:50:53.323394 kubelet[1401]: E0513 00:50:53.323311 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:54.271140 kubelet[1401]: E0513 00:50:54.271108 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:50:54.324378 kubelet[1401]: E0513 00:50:54.324334 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:54.535501 systemd[1]: run-containerd-runc-k8s.io-60355b495e088f5270c90d244faf3e14910b6ef179e3fa5fd447d3a95ebd2394-runc.u7FHwZ.mount: Deactivated successfully.
May 13 00:50:54.868457 systemd-networkd[1025]: lxc_health: Link UP
May 13 00:50:54.880988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:50:54.881018 systemd-networkd[1025]: lxc_health: Gained carrier
May 13 00:50:55.324953 kubelet[1401]: E0513 00:50:55.324888 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:56.271772 kubelet[1401]: E0513 00:50:56.271739 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:50:56.325703 kubelet[1401]: E0513 00:50:56.325667 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:56.498093 systemd-networkd[1025]: lxc_health: Gained IPv6LL
May 13 00:50:56.954868 kubelet[1401]: E0513 00:50:56.954832 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:50:57.326264 kubelet[1401]: E0513 00:50:57.326185 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:57.957516 kubelet[1401]: E0513 00:50:57.957180 1401 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:50:58.326845 kubelet[1401]: E0513 00:50:58.326808 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:50:59.327723 kubelet[1401]: E0513 00:50:59.327634 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:51:00.328613 kubelet[1401]: E0513 00:51:00.328560 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:51:01.328734 kubelet[1401]: E0513 00:51:01.328684 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:51:02.329823 kubelet[1401]: E0513 00:51:02.329772 1401 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"